00:00:00.000 Started by upstream project "autotest-per-patch" build number 132309 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.155 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.156 The recommended git tool is: git 00:00:00.156 using credential 00000000-0000-0000-0000-000000000002 00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.218 > git --version # 'git version 2.39.2' 00:00:00.218 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.884 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.896 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.907 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.907 > git config core.sparsecheckout # timeout=10 00:00:05.918 > git read-tree -mu HEAD # timeout=10 00:00:05.936 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.954 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.954 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.150 [Pipeline] Start of Pipeline 00:00:06.164 [Pipeline] library 00:00:06.166 Loading library shm_lib@master 00:00:06.166 Library shm_lib@master is cached. Copying from home. 00:00:06.182 [Pipeline] node 00:00:06.210 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:06.213 [Pipeline] { 00:00:06.223 [Pipeline] catchError 00:00:06.224 [Pipeline] { 00:00:06.241 [Pipeline] wrap 00:00:06.247 [Pipeline] { 00:00:06.253 [Pipeline] stage 00:00:06.255 [Pipeline] { (Prologue) 00:00:06.268 [Pipeline] echo 00:00:06.269 Node: VM-host-SM38 00:00:06.273 [Pipeline] cleanWs 00:00:06.281 [WS-CLEANUP] Deleting project workspace... 00:00:06.282 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.288 [WS-CLEANUP] done 00:00:06.485 [Pipeline] setCustomBuildProperty 00:00:06.553 [Pipeline] httpRequest 00:00:06.891 [Pipeline] echo 00:00:06.893 Sorcerer 10.211.164.20 is alive 00:00:06.903 [Pipeline] retry 00:00:06.905 [Pipeline] { 00:00:06.916 [Pipeline] httpRequest 00:00:06.921 HttpMethod: GET 00:00:06.921 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.922 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.923 Response Code: HTTP/1.1 200 OK 00:00:06.923 Success: Status code 200 is in the accepted range: 200,404 00:00:06.924 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.976 [Pipeline] } 00:00:07.992 [Pipeline] // retry 00:00:07.999 [Pipeline] sh 00:00:08.287 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.306 [Pipeline] httpRequest 00:00:08.799 [Pipeline] echo 00:00:08.800 Sorcerer 10.211.164.20 is alive 00:00:08.809 [Pipeline] retry 00:00:08.811 [Pipeline] { 00:00:08.823 [Pipeline] httpRequest 00:00:08.827 HttpMethod: GET 00:00:08.828 URL: http://10.211.164.20/packages/spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:00:08.829 Sending request to url: http://10.211.164.20/packages/spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:00:08.845 Response Code: HTTP/1.1 200 OK 00:00:08.846 Success: Status code 200 is in the accepted range: 200,404 00:00:08.847 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:01:12.996 [Pipeline] } 00:01:13.013 [Pipeline] // retry 00:01:13.021 [Pipeline] sh 00:01:13.307 + tar --no-same-owner -xf spdk_403bf887ac6ce76246bbf3c9eb1f45699885908f.tar.gz 00:01:16.626 [Pipeline] sh 00:01:16.912 + git -C spdk log --oneline -n5 00:01:16.912 403bf887a nvmf: added support for add/delete host wrt referral 00:01:16.912 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:01:16.912 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:01:16.912 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:01:16.912 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:01:16.932 [Pipeline] writeFile 00:01:16.948 [Pipeline] sh 00:01:17.236 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:17.250 [Pipeline] sh 00:01:17.534 + cat autorun-spdk.conf 00:01:17.534 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.534 SPDK_TEST_NVME=1 00:01:17.534 SPDK_TEST_FTL=1 00:01:17.534 SPDK_TEST_ISAL=1 00:01:17.534 SPDK_RUN_ASAN=1 00:01:17.534 SPDK_RUN_UBSAN=1 00:01:17.534 SPDK_TEST_XNVME=1 00:01:17.534 SPDK_TEST_NVME_FDP=1 00:01:17.534 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.544 RUN_NIGHTLY=0 00:01:17.546 [Pipeline] } 00:01:17.559 [Pipeline] // stage 00:01:17.575 [Pipeline] stage 00:01:17.585 [Pipeline] { (Run VM) 00:01:17.605 [Pipeline] sh 00:01:17.893 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:17.893 + echo 'Start stage prepare_nvme.sh' 00:01:17.893 Start stage prepare_nvme.sh 00:01:17.893 + [[ -n 7 ]] 00:01:17.893 + disk_prefix=ex7 00:01:17.893 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:17.893 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:17.893 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:17.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.893 ++ SPDK_TEST_NVME=1 00:01:17.893 ++ SPDK_TEST_FTL=1 00:01:17.893 ++ SPDK_TEST_ISAL=1 00:01:17.893 ++ SPDK_RUN_ASAN=1 
00:01:17.893 ++ SPDK_RUN_UBSAN=1 00:01:17.893 ++ SPDK_TEST_XNVME=1 00:01:17.893 ++ SPDK_TEST_NVME_FDP=1 00:01:17.893 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.893 ++ RUN_NIGHTLY=0 00:01:17.893 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:17.893 + nvme_files=() 00:01:17.893 + declare -A nvme_files 00:01:17.893 + backend_dir=/var/lib/libvirt/images/backends 00:01:17.893 + nvme_files['nvme.img']=5G 00:01:17.893 + nvme_files['nvme-cmb.img']=5G 00:01:17.893 + nvme_files['nvme-multi0.img']=4G 00:01:17.893 + nvme_files['nvme-multi1.img']=4G 00:01:17.893 + nvme_files['nvme-multi2.img']=4G 00:01:17.893 + nvme_files['nvme-openstack.img']=8G 00:01:17.893 + nvme_files['nvme-zns.img']=5G 00:01:17.893 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:17.893 + (( SPDK_TEST_FTL == 1 )) 00:01:17.893 + nvme_files["nvme-ftl.img"]=6G 00:01:17.893 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:17.893 + nvme_files["nvme-fdp.img"]=1G 00:01:17.893 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:17.893 + for nvme in "${!nvme_files[@]}" 00:01:17.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:17.893 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.893 + for nvme in "${!nvme_files[@]}" 00:01:17.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:01:17.893 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:17.893 + for nvme in "${!nvme_files[@]}" 00:01:17.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:17.893 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.893 + for nvme in "${!nvme_files[@]}" 00:01:17.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:17.893 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:17.893 + for nvme in "${!nvme_files[@]}" 00:01:17.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:17.893 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.893 + for nvme in "${!nvme_files[@]}" 00:01:17.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:18.155 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.155 + for nvme in "${!nvme_files[@]}" 00:01:18.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:18.155 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.155 + for nvme in "${!nvme_files[@]}" 00:01:18.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:01:18.155 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:18.155 + for nvme in "${!nvme_files[@]}" 00:01:18.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:18.155 Formatting 
'/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.155 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:18.155 + echo 'End stage prepare_nvme.sh' 00:01:18.155 End stage prepare_nvme.sh 00:01:18.168 [Pipeline] sh 00:01:18.452 + DISTRO=fedora39 00:01:18.452 + CPUS=10 00:01:18.452 + RAM=12288 00:01:18.452 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:18.452 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:18.452 00:01:18.452 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:18.452 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:18.452 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:18.452 HELP=0 00:01:18.452 DRY_RUN=0 00:01:18.452 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:01:18.452 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:18.452 NVME_AUTO_CREATE=0 00:01:18.452 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:01:18.452 NVME_CMB=,,,, 00:01:18.452 NVME_PMR=,,,, 00:01:18.452 NVME_ZNS=,,,, 00:01:18.452 NVME_MS=true,,,, 00:01:18.452 NVME_FDP=,,,on, 00:01:18.452 SPDK_VAGRANT_DISTRO=fedora39 00:01:18.452 SPDK_VAGRANT_VMCPU=10 00:01:18.452 SPDK_VAGRANT_VMRAM=12288 00:01:18.452 SPDK_VAGRANT_PROVIDER=libvirt 00:01:18.452 SPDK_VAGRANT_HTTP_PROXY= 00:01:18.452 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:18.452 SPDK_OPENSTACK_NETWORK=0 00:01:18.452 VAGRANT_PACKAGE_BOX=0 00:01:18.452 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:18.452 FORCE_DISTRO=true 00:01:18.452 VAGRANT_BOX_VERSION= 00:01:18.452 EXTRA_VAGRANTFILES= 00:01:18.452 NIC_MODEL=e1000 00:01:18.452 00:01:18.452 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:01:18.452 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:21.055 Bringing machine 'default' up with 'libvirt' provider... 00:01:21.317 ==> default: Creating image (snapshot of base box volume). 00:01:21.317 ==> default: Creating domain with the following settings... 
00:01:21.317 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731930258_479ed12216a28b9be83c
00:01:21.317 ==> default: -- Domain type: kvm
00:01:21.317 ==> default: -- Cpus: 10
00:01:21.317 ==> default: -- Feature: acpi
00:01:21.317 ==> default: -- Feature: apic
00:01:21.317 ==> default: -- Feature: pae
00:01:21.317 ==> default: -- Memory: 12288M
00:01:21.317 ==> default: -- Memory Backing: hugepages:
00:01:21.317 ==> default: -- Management MAC:
00:01:21.317 ==> default: -- Loader:
00:01:21.317 ==> default: -- Nvram:
00:01:21.317 ==> default: -- Base box: spdk/fedora39
00:01:21.317 ==> default: -- Storage pool: default
00:01:21.317 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731930258_479ed12216a28b9be83c.img (20G)
00:01:21.317 ==> default: -- Volume Cache: default
00:01:21.317 ==> default: -- Kernel:
00:01:21.317 ==> default: -- Initrd:
00:01:21.317 ==> default: -- Graphics Type: vnc
00:01:21.317 ==> default: -- Graphics Port: -1
00:01:21.317 ==> default: -- Graphics IP: 127.0.0.1
00:01:21.317 ==> default: -- Graphics Password: Not defined
00:01:21.317 ==> default: -- Video Type: cirrus
00:01:21.317 ==> default: -- Video VRAM: 9216
00:01:21.317 ==> default: -- Sound Type:
00:01:21.317 ==> default: -- Keymap: en-us
00:01:21.317 ==> default: -- TPM Path:
00:01:21.317 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:21.317 ==> default: -- Command line args:
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:21.317 ==> default: -> value=-drive,
00:01:21.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:21.317 ==> default: -> value=-drive,
00:01:21.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:21.317 ==> default: -> value=-drive,
00:01:21.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:21.317 ==> default: -> value=-drive,
00:01:21.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:21.317 ==> default: -> value=-drive,
00:01:21.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:21.317 ==> default: -> value=-drive,
00:01:21.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:21.317 ==> default: -> value=-device,
00:01:21.317 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:21.317 ==> default: Creating shared folders metadata...
00:01:21.317 ==> default: Starting domain.
00:01:23.233 ==> default: Waiting for domain to get an IP address...
00:01:41.383 ==> default: Waiting for SSH to become available...
00:01:41.383 ==> default: Configuring and enabling network interfaces...
00:01:44.686     default: SSH address: 192.168.121.117:22
00:01:44.686     default: SSH username: vagrant
00:01:44.686     default: SSH auth method: private key
00:01:47.231 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:55.378 ==> default: Mounting SSHFS shared folder...
00:01:57.928 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:57.928 ==> default: Checking Mount..
00:01:58.872 ==> default: Folder Successfully Mounted!
00:01:58.872
00:01:58.872 SUCCESS!
00:01:58.872
00:01:58.872 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:58.872 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:58.872 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:58.872
00:01:58.883 [Pipeline] }
00:01:58.899 [Pipeline] // stage
00:01:58.909 [Pipeline] dir
00:01:58.910 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:58.911 [Pipeline] {
00:01:58.924 [Pipeline] catchError
00:01:58.925 [Pipeline] {
00:01:58.939 [Pipeline] sh
00:01:59.227 + vagrant ssh-config --host vagrant
00:01:59.227 + sed -ne '/^Host/,$p'
00:01:59.227 + tee ssh_conf
00:02:02.529 Host vagrant
00:02:02.529   HostName 192.168.121.117
00:02:02.529   User vagrant
00:02:02.529   Port 22
00:02:02.529   UserKnownHostsFile /dev/null
00:02:02.529   StrictHostKeyChecking no
00:02:02.529   PasswordAuthentication no
00:02:02.529   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:02.529   IdentitiesOnly yes
00:02:02.529   LogLevel FATAL
00:02:02.529   ForwardAgent yes
00:02:02.529   ForwardX11 yes
00:02:02.529
00:02:02.542 [Pipeline] withEnv
00:02:02.547 [Pipeline] {
00:02:02.564 [Pipeline] sh
00:02:02.846 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:02.846 source /etc/os-release
00:02:02.846 [[ -e /image.version ]] && img=$(< /image.version)
00:02:02.846 # Minimal, systemd-like check.
00:02:02.846 if [[ -e /.dockerenv ]]; then 00:02:02.846 # Clear garbage from the node'\''s name: 00:02:02.846 # agt-er_autotest_547-896 -> autotest_547-896 00:02:02.846 # $HOSTNAME is the actual container id 00:02:02.846 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:02.846 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:02.846 # We can assume this is a mount from a host where container is running, 00:02:02.846 # so fetch its hostname to easily identify the target swarm worker. 00:02:02.846 container="$(< /etc/hostname) ($agent)" 00:02:02.846 else 00:02:02.846 # Fallback 00:02:02.846 container=$agent 00:02:02.846 fi 00:02:02.846 fi 00:02:02.846 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:02.846 ' 00:02:03.117 [Pipeline] } 00:02:03.131 [Pipeline] // withEnv 00:02:03.138 [Pipeline] setCustomBuildProperty 00:02:03.151 [Pipeline] stage 00:02:03.153 [Pipeline] { (Tests) 00:02:03.169 [Pipeline] sh 00:02:03.450 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:03.727 [Pipeline] sh 00:02:04.014 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.291 [Pipeline] timeout 00:02:04.292 Timeout set to expire in 50 min 00:02:04.294 [Pipeline] { 00:02:04.309 [Pipeline] sh 00:02:04.593 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:05.186 HEAD is now at 403bf887a nvmf: added support for add/delete host wrt referral 00:02:05.201 [Pipeline] sh 00:02:05.488 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:05.765 [Pipeline] sh 00:02:06.050 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.328 [Pipeline] sh 00:02:06.612 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:06.877 ++ readlink -f spdk_repo 00:02:06.877 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.877 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.877 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.877 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.877 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.877 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.877 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.877 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:06.877 + cd /home/vagrant/spdk_repo 00:02:06.877 + source /etc/os-release 00:02:06.877 ++ NAME='Fedora Linux' 00:02:06.877 ++ VERSION='39 (Cloud Edition)' 00:02:06.877 ++ ID=fedora 00:02:06.877 ++ VERSION_ID=39 00:02:06.877 ++ VERSION_CODENAME= 00:02:06.877 ++ PLATFORM_ID=platform:f39 00:02:06.877 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:06.877 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.877 ++ LOGO=fedora-logo-icon 00:02:06.877 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:06.877 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.877 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:06.877 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.877 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.877 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.877 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:06.877 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.877 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:06.877 ++ SUPPORT_END=2024-11-12 00:02:06.877 ++ VARIANT='Cloud Edition' 00:02:06.877 ++ VARIANT_ID=cloud 00:02:06.877 + uname -a 00:02:06.877 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:06.877 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:07.399 Hugepages 00:02:07.399 node hugesize free / total 00:02:07.399 node0 1048576kB 0 / 0 00:02:07.399 node0 2048kB 0 / 0 00:02:07.399 00:02:07.399 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.399 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.661 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.661 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:07.661 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:07.661 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:07.661 + rm -f /tmp/spdk-ld-path 00:02:07.661 + source autorun-spdk.conf 00:02:07.661 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.661 ++ SPDK_TEST_NVME=1 00:02:07.661 ++ SPDK_TEST_FTL=1 00:02:07.661 ++ SPDK_TEST_ISAL=1 00:02:07.661 ++ SPDK_RUN_ASAN=1 00:02:07.661 ++ SPDK_RUN_UBSAN=1 00:02:07.661 ++ SPDK_TEST_XNVME=1 00:02:07.661 ++ SPDK_TEST_NVME_FDP=1 00:02:07.661 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.661 ++ RUN_NIGHTLY=0 00:02:07.661 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.661 + [[ -n '' ]] 00:02:07.661 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:07.661 + for M in /var/spdk/build-*-manifest.txt 00:02:07.661 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.661 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.661 + for M in /var/spdk/build-*-manifest.txt 00:02:07.661 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.661 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.661 + for M in /var/spdk/build-*-manifest.txt 00:02:07.661 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.661 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.661 ++ uname 00:02:07.661 + [[ Linux == \L\i\n\u\x ]] 00:02:07.661 + sudo dmesg -T 00:02:07.661 + sudo dmesg --clear 00:02:07.661 + dmesg_pid=5023 00:02:07.661 
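The device table from setup.sh status closes the loop on the VM definition: the four controllers nvme0-nvme3 at 0000:00:10.0 through 0000:00:13.0 are exactly the four -device nvme entries (serials 12340-12343) passed to QEMU earlier in the log, with nvme2 carrying the three multi* namespaces and nvme3 the FDP-enabled namespace. A minimal sketch of how one could cross-check this from inside the guest (not part of the CI scripts; it assumes the stock Linux NVMe driver's sysfs layout):

    # For each NVMe controller, print its QEMU-assigned serial and its namespace count.
    # Controllers appear as /sys/class/nvme/nvmeX; namespaces as nvmeXnY subdirectories.
    for ctrl in /sys/class/nvme/nvme*; do
        serial=$(cat "$ctrl/serial")
        nscount=$(ls -d "$ctrl"/nvme*n* 2>/dev/null | wc -l)
        echo "$(basename "$ctrl"): serial=$serial namespaces=$nscount"
    done

On this VM the sketch would be expected to report one namespace each for nvme0, nvme1, and nvme3, and three for nvme2, matching the table above.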
+ [[ Fedora Linux == FreeBSD ]] 00:02:07.661 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.661 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.661 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.661 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.661 + sudo dmesg -Tw 00:02:07.661 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.661 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.661 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.661 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.661 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.661 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.661 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.661 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.661 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.661 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.661 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.923 11:45:05 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:07.923 11:45:05 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.923 11:45:05 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:07.923 11:45:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:07.923 11:45:05 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.923 11:45:05 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:07.923 11:45:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:07.923 11:45:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:07.923 11:45:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.923 11:45:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.923 11:45:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.923 11:45:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.923 11:45:05 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.923 11:45:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.923 11:45:05 -- paths/export.sh@5 -- $ export PATH 00:02:07.923 11:45:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.923 11:45:05 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:07.923 11:45:05 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:07.923 11:45:05 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731930305.XXXXXX 00:02:07.923 11:45:05 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731930305.QSTBxc 00:02:07.923 11:45:05 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:07.923 11:45:05 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:07.923 11:45:05 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:07.923 11:45:05 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:07.923 11:45:05 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.923 11:45:05 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:07.923 11:45:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:07.923 11:45:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.923 11:45:05 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:07.923 11:45:05 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:07.923 11:45:05 -- pm/common@17 -- $ local monitor 00:02:07.923 11:45:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.923 11:45:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.923 11:45:05 -- pm/common@25 -- $ sleep 1 00:02:07.923 11:45:05 -- pm/common@21 -- $ date +%s 00:02:07.923 11:45:05 -- pm/common@21 -- $ date +%s 00:02:07.923 11:45:05 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731930305 00:02:07.923 11:45:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731930305 00:02:07.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731930305_collect-vmstat.pm.log 00:02:07.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731930305_collect-cpu-load.pm.log 00:02:08.867 11:45:06 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:08.867 11:45:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.867 11:45:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.867 11:45:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.867 11:45:06 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.867 Mon Nov 18 11:45:06 AM UTC 2024 00:02:08.867 11:45:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.867 v25.01-pre-159-g403bf887a 00:02:08.867 11:45:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:08.867 11:45:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:08.867 11:45:06 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:08.867 11:45:06 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:08.867 11:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.867 ************************************ 00:02:08.867 START TEST asan 00:02:08.867 ************************************ 00:02:08.867 using asan 00:02:08.867 11:45:06 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:08.867 00:02:08.867 real 0m0.000s 00:02:08.867 user 0m0.000s 00:02:08.867 sys 0m0.000s 00:02:08.867 11:45:06 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:08.867 ************************************ 00:02:08.867 END TEST asan 00:02:08.867 ************************************ 00:02:08.867 11:45:06 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.867 11:45:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.867 11:45:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.867 11:45:06 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:08.867 11:45:06 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:08.867 11:45:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.867 ************************************ 00:02:08.867 START TEST ubsan 00:02:08.867 ************************************ 00:02:08.867 using ubsan 00:02:08.867 ************************************ 00:02:08.867 END TEST ubsan 00:02:08.867 ************************************ 00:02:08.867 11:45:06 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:08.867 00:02:08.867 real 0m0.000s 00:02:08.867 user 0m0.000s 00:02:08.867 sys 0m0.000s 00:02:08.867 11:45:06 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:08.867 11:45:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.128 11:45:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:09.128 11:45:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.128 11:45:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.128 11:45:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.128 11:45:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.128 11:45:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.128 11:45:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
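The asan and ubsan blocks above show SPDK's run_test helper at work: it prints a START TEST banner, times the wrapped command, and closes with an END TEST banner (here the wrapped command is just an echo, hence the 0m0.000s timings). A simplified sketch of that banner-and-timing pattern, assuming nothing beyond what the log shows (the real helper in common/autotest_common.sh also manages xtrace state and exit codes):

    # run_test_sketch NAME CMD [ARGS...] -- banner, run, time, banner
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"   # reports real/user/sys, as seen in the log
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test_sketch asan echo 'using asan'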
00:02:09.128  11:45:06  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:09.128  11:45:06  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:09.128  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:09.128  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:09.388  Using 'verbs' RDMA provider
00:02:20.345  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:32.581  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:32.581  Creating mk/config.mk...done.
00:02:32.581  Creating mk/cc.flags.mk...done.
00:02:32.581  Type 'make' to build.
00:02:32.582  11:45:29  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:32.582  11:45:29  -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:32.582  11:45:29  -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:32.582  11:45:29  -- common/autotest_common.sh@10 -- $ set +x
00:02:32.582  ************************************
00:02:32.582  START TEST make
00:02:32.582  ************************************
00:02:32.582  11:45:29 make -- common/autotest_common.sh@1127 -- $ make -j10
00:02:32.582  (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:32.582  	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:32.582  	meson setup builddir \
00:02:32.582  		-Dwith-libaio=enabled \
00:02:32.582  		-Dwith-liburing=enabled \
00:02:32.582  		-Dwith-libvfn=disabled \
00:02:32.582  		-Dwith-spdk=disabled \
00:02:32.582  		-Dexamples=false \
00:02:32.582  		-Dtests=false \
00:02:32.582  		-Dtools=false && \
00:02:32.582  	meson compile -C builddir && \
00:02:32.582  	cd -)
00:02:32.582  make[1]: Nothing to be done for 'all'.
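The configure invocation above is assembled from the autorun-spdk.conf flags sourced earlier: get_config_params (visible in the autobuild trace) turned SPDK_RUN_ASAN=1, SPDK_RUN_UBSAN=1, SPDK_TEST_XNVME=1, and friends into the --enable-asan, --enable-ubsan, --with-xnvme arguments, and autobuild appended --with-shared. A rough sketch of that mapping, assuming only what the conf file and the configure line show (the real logic lives in SPDK's common autobuild/autotest scripts):

    # Illustrative only: how autorun-spdk.conf flags surface as configure arguments.
    config_params='--enable-debug --enable-werror'
    [[ $SPDK_RUN_ASAN -eq 1 ]]   && config_params+=' --enable-asan'
    [[ $SPDK_RUN_UBSAN -eq 1 ]]  && config_params+=' --enable-ubsan'
    [[ $SPDK_TEST_XNVME -eq 1 ]] && config_params+=' --with-xnvme'
    ./configure $config_params --with-shared

The make that follows descends into the xnvme submodule first, driving the meson build shown next.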
00:02:33.962 The Meson build system 00:02:33.962 Version: 1.5.0 00:02:33.962 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:33.962 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:33.962 Build type: native build 00:02:33.962 Project name: xnvme 00:02:33.962 Project version: 0.7.5 00:02:33.962 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:33.962 C linker for the host machine: cc ld.bfd 2.40-14 00:02:33.962 Host machine cpu family: x86_64 00:02:33.962 Host machine cpu: x86_64 00:02:33.962 Message: host_machine.system: linux 00:02:33.962 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:33.962 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:33.962 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:33.962 Run-time dependency threads found: YES 00:02:33.962 Has header "setupapi.h" : NO 00:02:33.962 Has header "linux/blkzoned.h" : YES 00:02:33.962 Has header "linux/blkzoned.h" : YES (cached) 00:02:33.962 Has header "libaio.h" : YES 00:02:33.962 Library aio found: YES 00:02:33.962 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:33.962 Run-time dependency liburing found: YES 2.2 00:02:33.962 Dependency libvfn skipped: feature with-libvfn disabled 00:02:33.962 Found CMake: /usr/bin/cmake (3.27.7) 00:02:33.962 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:33.962 Subproject spdk : skipped: feature with-spdk disabled 00:02:33.962 Run-time dependency appleframeworks found: NO (tried framework) 00:02:33.962 Run-time dependency appleframeworks found: NO (tried framework) 00:02:33.962 Library rt found: YES 00:02:33.962 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:33.962 Configuring xnvme_config.h using configuration 00:02:33.962 Configuring xnvme.spec using configuration 00:02:33.962 Run-time dependency bash-completion found: YES 2.11 00:02:33.962 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:33.962 Program cp found: YES (/usr/bin/cp) 00:02:33.962 Build targets in project: 3 00:02:33.962 00:02:33.962 xnvme 0.7.5 00:02:33.962 00:02:33.962 Subprojects 00:02:33.962 spdk : NO Feature 'with-spdk' disabled 00:02:33.962 00:02:33.962 User defined options 00:02:33.962 examples : false 00:02:33.962 tests : false 00:02:33.962 tools : false 00:02:33.962 with-libaio : enabled 00:02:33.962 with-liburing: enabled 00:02:33.962 with-libvfn : disabled 00:02:33.962 with-spdk : disabled 00:02:33.962 00:02:33.962 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.529 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:34.529 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:34.529 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:34.529 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:34.529 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:34.529 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:34.529 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:34.529 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:34.529 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:34.529 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:34.529 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:34.529 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:34.529 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:34.529 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:34.529 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:34.529 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:34.529 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:34.529 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:34.529 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:34.787 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:34.787 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:34.787 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:34.787 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:34.787 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:34.787 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:34.787 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:34.787 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:34.787 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:34.787 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:34.787 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:34.787 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:34.787 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:34.787 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:34.787 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:34.787 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:34.787 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:34.787 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:34.787 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:34.787 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:34.787 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:34.787 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:34.787 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:34.787 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:34.787 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:34.787 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:34.787 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:34.787 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:34.787 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:34.787 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:34.787 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:34.787 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:34.787 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:34.787 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:34.787 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:35.046 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:35.046 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:35.046 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:35.046 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:35.046 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:35.046 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:35.046 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:35.046 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:35.046 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:35.046 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:35.046 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:35.046 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:35.046 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:35.046 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:35.046 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:35.046 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:35.046 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:35.046 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:35.304 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:35.304 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:35.561 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:35.562 [75/76] Linking static target lib/libxnvme.a 00:02:35.562 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:35.562 INFO: autodetecting backend as ninja 00:02:35.562 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:35.562 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:42.196 The Meson build system 00:02:42.196 Version: 1.5.0 00:02:42.196 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:42.196 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:42.196 Build type: native build 00:02:42.196 Program cat found: YES (/usr/bin/cat) 00:02:42.196 Project name: DPDK 00:02:42.196 Project version: 24.03.0 00:02:42.196 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.196 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.196 Host machine cpu family: x86_64 00:02:42.196 Host machine cpu: x86_64 00:02:42.196 Message: ## Building in Developer Mode ## 00:02:42.196 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.196 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:42.196 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.196 Program python3 found: YES (/usr/bin/python3) 00:02:42.196 Program cat found: YES (/usr/bin/cat) 00:02:42.196 Compiler for C supports arguments -march=native: YES 00:02:42.196 Checking for size of "void *" : 8 00:02:42.196 Checking for size of "void *" : 8 (cached) 00:02:42.196 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:42.196 Library m found: YES 00:02:42.196 Library numa found: YES 00:02:42.196 Has header "numaif.h" : YES 00:02:42.196 Library fdt found: NO 00:02:42.196 Library execinfo found: NO 00:02:42.196 Has header "execinfo.h" : YES 00:02:42.196 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.196 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.196 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.196 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.196 Run-time dependency openssl found: YES 3.1.1 00:02:42.196 Run-time dependency libpcap found: YES 1.10.4 00:02:42.196 Has header "pcap.h" with dependency libpcap: YES 00:02:42.196 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.196 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.196 Compiler for C supports arguments -Wformat: YES 00:02:42.196 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.196 Compiler for C supports arguments -Wformat-security: NO 00:02:42.196 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.196 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.196 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.196 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.196 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.196 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.196 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.196 Compiler for C supports arguments -Wundef: YES 00:02:42.196 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.196 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.196 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.196 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.196 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.196 Program objdump found: YES (/usr/bin/objdump) 00:02:42.196 Compiler for C supports arguments -mavx512f: YES 00:02:42.196 Checking if "AVX512 checking" compiles: YES 00:02:42.196 Fetching value of define "__SSE4_2__" : 1 00:02:42.196 Fetching value of define "__AES__" : 1 00:02:42.196 Fetching value of define "__AVX__" : 1 00:02:42.196 Fetching value of define "__AVX2__" : 1 00:02:42.196 Fetching value of define "__AVX512BW__" : 1 00:02:42.196 Fetching value of define "__AVX512CD__" : 1 00:02:42.196 Fetching value of define "__AVX512DQ__" : 1 00:02:42.196 Fetching value of define "__AVX512F__" : 1 00:02:42.196 Fetching value of define "__AVX512VL__" : 1 00:02:42.196 Fetching value of define "__PCLMUL__" : 1 00:02:42.196 Fetching value of define "__RDRND__" : 1 00:02:42.196 Fetching value of define "__RDSEED__" : 1 00:02:42.196 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:42.196 Fetching value of define "__znver1__" : (undefined) 00:02:42.196 Fetching value of define "__znver2__" : (undefined) 00:02:42.196 Fetching value of define "__znver3__" : (undefined) 00:02:42.196 Fetching value of define "__znver4__" : (undefined) 00:02:42.196 Library asan found: YES 00:02:42.196 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.196 Message: lib/log: Defining dependency "log" 00:02:42.196 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.196 Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.196 Library rt found: YES 00:02:42.196 Checking for function "getentropy" : NO 00:02:42.196 Message: 
lib/eal: Defining dependency "eal"
00:02:42.196  Message: lib/ring: Defining dependency "ring"
00:02:42.196  Message: lib/rcu: Defining dependency "rcu"
00:02:42.196  Message: lib/mempool: Defining dependency "mempool"
00:02:42.196  Message: lib/mbuf: Defining dependency "mbuf"
00:02:42.196  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:42.196  Fetching value of define "__AVX512F__" : 1 (cached)
00:02:42.196  Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:42.196  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:42.196  Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:42.196  Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:42.196  Compiler for C supports arguments -mpclmul: YES
00:02:42.196  Compiler for C supports arguments -maes: YES
00:02:42.196  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:42.196  Compiler for C supports arguments -mavx512bw: YES
00:02:42.196  Compiler for C supports arguments -mavx512dq: YES
00:02:42.196  Compiler for C supports arguments -mavx512vl: YES
00:02:42.196  Compiler for C supports arguments -mvpclmulqdq: YES
00:02:42.196  Compiler for C supports arguments -mavx2: YES
00:02:42.196  Compiler for C supports arguments -mavx: YES
00:02:42.196  Message: lib/net: Defining dependency "net"
00:02:42.196  Message: lib/meter: Defining dependency "meter"
00:02:42.196  Message: lib/ethdev: Defining dependency "ethdev"
00:02:42.196  Message: lib/pci: Defining dependency "pci"
00:02:42.196  Message: lib/cmdline: Defining dependency "cmdline"
00:02:42.196  Message: lib/hash: Defining dependency "hash"
00:02:42.196  Message: lib/timer: Defining dependency "timer"
00:02:42.196  Message: lib/compressdev: Defining dependency "compressdev"
00:02:42.196  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:42.196  Message: lib/dmadev: Defining dependency "dmadev"
00:02:42.196  Compiler for C supports arguments -Wno-cast-qual: YES
00:02:42.196  Message: lib/power: Defining dependency "power"
00:02:42.196  Message: lib/reorder: Defining dependency "reorder"
00:02:42.196  Message: lib/security: Defining dependency "security"
00:02:42.196  Has header "linux/userfaultfd.h" : YES
00:02:42.196  Has header "linux/vduse.h" : YES
00:02:42.196  Message: lib/vhost: Defining dependency "vhost"
00:02:42.196  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:42.196  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:42.196  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:42.196  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:42.196  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:42.196  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:42.196  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:42.196  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:42.196  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:42.196  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:42.197  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:42.197  Configuring doxy-api-html.conf using configuration
00:02:42.197  Configuring doxy-api-man.conf using configuration
00:02:42.197  Program mandb found: YES (/usr/bin/mandb)
00:02:42.197  Program sphinx-build found: NO
00:02:42.197  Configuring rte_build_config.h using configuration
00:02:42.197  Message:
00:02:42.197  =================
00:02:42.197  Applications Enabled
00:02:42.197  =================
00:02:42.197
00:02:42.197  apps:
00:02:42.197
00:02:42.197
00:02:42.197  Message:
00:02:42.197  =================
00:02:42.197  Libraries Enabled
00:02:42.197  =================
00:02:42.197
00:02:42.197  libs:
00:02:42.197  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:42.197  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:42.197  	cryptodev, dmadev, power, reorder, security, vhost,
00:02:42.197
00:02:42.197  Message:
00:02:42.197  ===============
00:02:42.197  Drivers Enabled
00:02:42.197  ===============
00:02:42.197
00:02:42.197  common:
00:02:42.197
00:02:42.197  bus:
00:02:42.197  	pci, vdev,
00:02:42.197  mempool:
00:02:42.197  	ring,
00:02:42.197  dma:
00:02:42.197
00:02:42.197  net:
00:02:42.197
00:02:42.197  crypto:
00:02:42.197
00:02:42.197  compress:
00:02:42.197
00:02:42.197  vdpa:
00:02:42.197
00:02:42.197
00:02:42.197  Message:
00:02:42.197  =================
00:02:42.197  Content Skipped
00:02:42.197  =================
00:02:42.197
00:02:42.197  apps:
00:02:42.197  	dumpcap: explicitly disabled via build config
00:02:42.197  	graph: explicitly disabled via build config
00:02:42.197  	pdump: explicitly disabled via build config
00:02:42.197  	proc-info: explicitly disabled via build config
00:02:42.197  	test-acl: explicitly disabled via build config
00:02:42.197  	test-bbdev: explicitly disabled via build config
00:02:42.197  	test-cmdline: explicitly disabled via build config
00:02:42.197  	test-compress-perf: explicitly disabled via build config
00:02:42.197  	test-crypto-perf: explicitly disabled via build config
00:02:42.197  	test-dma-perf: explicitly disabled via build config
00:02:42.197  	test-eventdev: explicitly disabled via build config
00:02:42.197  	test-fib: explicitly disabled via build config
00:02:42.197  	test-flow-perf: explicitly disabled via build config
00:02:42.197  	test-gpudev: explicitly disabled via build config
00:02:42.197  	test-mldev: explicitly disabled via build config
00:02:42.197  	test-pipeline: explicitly disabled via build config
00:02:42.197  	test-pmd: explicitly disabled via build config
00:02:42.197  	test-regex: explicitly disabled via build config
00:02:42.197  	test-sad: explicitly disabled via build config
00:02:42.197  	test-security-perf: explicitly disabled via build config
00:02:42.197
00:02:42.197  libs:
00:02:42.197  	argparse: explicitly disabled via build config
00:02:42.197  	metrics: explicitly disabled via build config
00:02:42.197  	acl: explicitly disabled via build config
00:02:42.197  	bbdev: explicitly disabled via build config
00:02:42.197  	bitratestats: explicitly disabled via build config
00:02:42.197  	bpf: explicitly disabled via build config
00:02:42.197  	cfgfile: explicitly disabled via build config
00:02:42.197  	distributor: explicitly disabled via build config
00:02:42.197  	efd: explicitly disabled via build config
00:02:42.197  	eventdev: explicitly disabled via build config
00:02:42.197  	dispatcher: explicitly disabled via build config
00:02:42.197  	gpudev: explicitly disabled via build config
00:02:42.197  	gro: explicitly disabled via build config
00:02:42.197  	gso: explicitly disabled via build config
00:02:42.197  	ip_frag: explicitly disabled via build config
00:02:42.197  	jobstats: explicitly disabled via build config
00:02:42.197  	latencystats: explicitly disabled via build config
00:02:42.197  	lpm: explicitly disabled via build config
00:02:42.197  	member: explicitly disabled via build config
00:02:42.197  	pcapng: explicitly disabled via build config
00:02:42.197  	rawdev: explicitly disabled via build config
00:02:42.197  	regexdev: explicitly disabled via build config
00:02:42.197  	mldev: explicitly disabled via build config
00:02:42.197  	rib: explicitly disabled via build config
00:02:42.197  	sched: explicitly disabled via build config
00:02:42.197  	stack: explicitly disabled via build config
00:02:42.197  	ipsec: explicitly disabled via build config
00:02:42.197  	pdcp: explicitly disabled via build config
00:02:42.197  	fib: explicitly disabled via build config
00:02:42.197  	port: explicitly disabled via build config
00:02:42.197  	pdump: explicitly disabled via build config
00:02:42.197  	table: explicitly disabled via build config
00:02:42.197  	pipeline: explicitly disabled via build config
00:02:42.197  	graph: explicitly disabled via build config
00:02:42.197  	node: explicitly disabled via build config
00:02:42.197
00:02:42.197  drivers:
00:02:42.197  	common/cpt: not in enabled drivers build config
00:02:42.197  	common/dpaax: not in enabled drivers build config
00:02:42.197  	common/iavf: not in enabled drivers build config
00:02:42.197  	common/idpf: not in enabled drivers build config
00:02:42.197  	common/ionic: not in enabled drivers build config
00:02:42.197  	common/mvep: not in enabled drivers build config
00:02:42.197  	common/octeontx: not in enabled drivers build config
00:02:42.197  	bus/auxiliary: not in enabled drivers build config
00:02:42.197  	bus/cdx: not in enabled drivers build config
00:02:42.197  	bus/dpaa: not in enabled drivers build config
00:02:42.197  	bus/fslmc: not in enabled drivers build config
00:02:42.197  	bus/ifpga: not in enabled drivers build config
00:02:42.197  	bus/platform: not in enabled drivers build config
00:02:42.197  	bus/uacce: not in enabled drivers build config
00:02:42.197  	bus/vmbus: not in enabled drivers build config
00:02:42.197  	common/cnxk: not in enabled drivers build config
00:02:42.197  	common/mlx5: not in enabled drivers build config
00:02:42.197  	common/nfp: not in enabled drivers build config
00:02:42.197  	common/nitrox: not in enabled drivers build config
00:02:42.197  	common/qat: not in enabled drivers build config
00:02:42.197  	common/sfc_efx: not in enabled drivers build config
00:02:42.197  	mempool/bucket: not in enabled drivers build config
00:02:42.197  	mempool/cnxk: not in enabled drivers build config
00:02:42.197  	mempool/dpaa: not in enabled drivers build config
00:02:42.197  	mempool/dpaa2: not in enabled drivers build config
00:02:42.197  	mempool/octeontx: not in enabled drivers build config
00:02:42.197  	mempool/stack: not in enabled drivers build config
00:02:42.197  	dma/cnxk: not in enabled drivers build config
00:02:42.197  	dma/dpaa: not in enabled drivers build config
00:02:42.197  	dma/dpaa2: not in enabled drivers build config
00:02:42.197  	dma/hisilicon: not in enabled drivers build config
00:02:42.197  	dma/idxd: not in enabled drivers build config
00:02:42.197  	dma/ioat: not in enabled drivers build config
00:02:42.197  	dma/skeleton: not in enabled drivers build config
00:02:42.197  	net/af_packet: not in enabled drivers build config
00:02:42.197  	net/af_xdp: not in enabled drivers build config
00:02:42.197  	net/ark: not in enabled drivers build config
00:02:42.197  	net/atlantic: not in enabled drivers build config
00:02:42.197  	net/avp: not in enabled drivers build config
00:02:42.197  	net/axgbe: not in enabled drivers build config
00:02:42.197  	net/bnx2x: not in enabled drivers build config
00:02:42.197  	net/bnxt: not in enabled drivers build config
00:02:42.197  	net/bonding: not in enabled drivers build config
00:02:42.197  	net/cnxk: not in enabled drivers build config
00:02:42.197  	net/cpfl: not in enabled drivers
build config 00:02:42.197 net/cxgbe: not in enabled drivers build config 00:02:42.197 net/dpaa: not in enabled drivers build config 00:02:42.197 net/dpaa2: not in enabled drivers build config 00:02:42.197 net/e1000: not in enabled drivers build config 00:02:42.197 net/ena: not in enabled drivers build config 00:02:42.197 net/enetc: not in enabled drivers build config 00:02:42.197 net/enetfec: not in enabled drivers build config 00:02:42.197 net/enic: not in enabled drivers build config 00:02:42.197 net/failsafe: not in enabled drivers build config 00:02:42.197 net/fm10k: not in enabled drivers build config 00:02:42.197 net/gve: not in enabled drivers build config 00:02:42.197 net/hinic: not in enabled drivers build config 00:02:42.197 net/hns3: not in enabled drivers build config 00:02:42.197 net/i40e: not in enabled drivers build config 00:02:42.197 net/iavf: not in enabled drivers build config 00:02:42.197 net/ice: not in enabled drivers build config 00:02:42.197 net/idpf: not in enabled drivers build config 00:02:42.197 net/igc: not in enabled drivers build config 00:02:42.197 net/ionic: not in enabled drivers build config 00:02:42.197 net/ipn3ke: not in enabled drivers build config 00:02:42.197 net/ixgbe: not in enabled drivers build config 00:02:42.197 net/mana: not in enabled drivers build config 00:02:42.197 net/memif: not in enabled drivers build config 00:02:42.197 net/mlx4: not in enabled drivers build config 00:02:42.197 net/mlx5: not in enabled drivers build config 00:02:42.197 net/mvneta: not in enabled drivers build config 00:02:42.197 net/mvpp2: not in enabled drivers build config 00:02:42.197 net/netvsc: not in enabled drivers build config 00:02:42.197 net/nfb: not in enabled drivers build config 00:02:42.197 net/nfp: not in enabled drivers build config 00:02:42.197 net/ngbe: not in enabled drivers build config 00:02:42.197 net/null: not in enabled drivers build config 00:02:42.197 net/octeontx: not in enabled drivers build config 00:02:42.197 net/octeon_ep: not in enabled drivers build config 00:02:42.197 net/pcap: not in enabled drivers build config 00:02:42.197 net/pfe: not in enabled drivers build config 00:02:42.197 net/qede: not in enabled drivers build config 00:02:42.197 net/ring: not in enabled drivers build config 00:02:42.197 net/sfc: not in enabled drivers build config 00:02:42.197 net/softnic: not in enabled drivers build config 00:02:42.197 net/tap: not in enabled drivers build config 00:02:42.197 net/thunderx: not in enabled drivers build config 00:02:42.197 net/txgbe: not in enabled drivers build config 00:02:42.197 net/vdev_netvsc: not in enabled drivers build config 00:02:42.197 net/vhost: not in enabled drivers build config 00:02:42.197 net/virtio: not in enabled drivers build config 00:02:42.197 net/vmxnet3: not in enabled drivers build config 00:02:42.197 raw/*: missing internal dependency, "rawdev" 00:02:42.197 crypto/armv8: not in enabled drivers build config 00:02:42.197 crypto/bcmfs: not in enabled drivers build config 00:02:42.197 crypto/caam_jr: not in enabled drivers build config 00:02:42.197 crypto/ccp: not in enabled drivers build config 00:02:42.197 crypto/cnxk: not in enabled drivers build config 00:02:42.197 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.198 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.198 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.198 crypto/mlx5: not in enabled drivers build config 00:02:42.198 crypto/mvsam: not in enabled drivers build config 00:02:42.198 crypto/nitrox: 
not in enabled drivers build config 00:02:42.198 crypto/null: not in enabled drivers build config 00:02:42.198 crypto/octeontx: not in enabled drivers build config 00:02:42.198 crypto/openssl: not in enabled drivers build config 00:02:42.198 crypto/scheduler: not in enabled drivers build config 00:02:42.198 crypto/uadk: not in enabled drivers build config 00:02:42.198 crypto/virtio: not in enabled drivers build config 00:02:42.198 compress/isal: not in enabled drivers build config 00:02:42.198 compress/mlx5: not in enabled drivers build config 00:02:42.198 compress/nitrox: not in enabled drivers build config 00:02:42.198 compress/octeontx: not in enabled drivers build config 00:02:42.198 compress/zlib: not in enabled drivers build config 00:02:42.198 regex/*: missing internal dependency, "regexdev" 00:02:42.198 ml/*: missing internal dependency, "mldev" 00:02:42.198 vdpa/ifc: not in enabled drivers build config 00:02:42.198 vdpa/mlx5: not in enabled drivers build config 00:02:42.198 vdpa/nfp: not in enabled drivers build config 00:02:42.198 vdpa/sfc: not in enabled drivers build config 00:02:42.198 event/*: missing internal dependency, "eventdev" 00:02:42.198 baseband/*: missing internal dependency, "bbdev" 00:02:42.198 gpu/*: missing internal dependency, "gpudev" 00:02:42.198 00:02:42.198 00:02:42.198 Build targets in project: 84 00:02:42.198 00:02:42.198 DPDK 24.03.0 00:02:42.198 00:02:42.198 User defined options 00:02:42.198 buildtype : debug 00:02:42.198 default_library : shared 00:02:42.198 libdir : lib 00:02:42.198 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.198 b_sanitize : address 00:02:42.198 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:42.198 c_link_args : 00:02:42.198 cpu_instruction_set: native 00:02:42.198 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.198 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.198 enable_docs : false 00:02:42.198 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:42.198 enable_kmods : false 00:02:42.198 max_lcores : 128 00:02:42.198 tests : false 00:02:42.198 00:02:42.198 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.765 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.765 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.765 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.765 [3/267] Linking static target lib/librte_kvargs.a 00:02:42.765 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.765 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.765 [6/267] Linking static target lib/librte_log.a 00:02:43.023 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.023 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.023 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.023 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.280 [11/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.280 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.280 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.280 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.280 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.281 [16/267] Linking static target lib/librte_telemetry.a 00:02:43.281 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.281 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.538 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.538 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.538 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.538 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.538 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.797 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.797 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.797 [26/267] Linking target lib/librte_log.so.24.1 00:02:43.797 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.797 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.797 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.797 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.797 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.055 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:44.055 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.055 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.055 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.055 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:44.055 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:44.055 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.055 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.055 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.313 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.313 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.313 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.313 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.313 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.313 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.313 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.313 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.571 [49/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.571 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.571 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.571 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.829 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.829 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.829 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.829 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.829 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.829 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.829 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.829 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.829 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.829 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.086 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.086 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.086 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.086 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.086 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.344 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.344 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.344 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.344 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.344 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.344 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.344 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.344 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.602 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.602 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.602 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.602 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.602 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.859 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.859 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.859 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.859 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.859 [85/267] Linking static target lib/librte_eal.a 00:02:45.859 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.859 [87/267] Linking static target lib/librte_ring.a 00:02:46.118 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.118 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.118 [90/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.118 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.118 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.118 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.118 [94/267] Linking static target lib/librte_mempool.a 00:02:46.377 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.377 [96/267] Linking static target lib/librte_rcu.a 00:02:46.377 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.377 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.377 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.377 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.377 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.650 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.650 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.650 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:46.650 [105/267] Linking static target lib/librte_net.a 00:02:46.650 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.960 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.960 [108/267] Linking static target lib/librte_meter.a 00:02:46.960 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.960 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.960 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.960 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.960 [113/267] Linking static target lib/librte_mbuf.a 00:02:46.960 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.960 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.219 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.219 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.219 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.219 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.219 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:47.477 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.477 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.477 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.735 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.735 [125/267] Linking static target lib/librte_pci.a 00:02:47.735 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.735 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.735 [128/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.735 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.735 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.735 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.735 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:47.994 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.994 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.994 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.994 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.994 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.994 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.994 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.994 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.994 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:47.994 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.994 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.994 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.252 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.252 [146/267] Linking static target lib/librte_cmdline.a 00:02:48.252 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.252 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.511 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.511 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.511 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:48.511 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.511 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.511 [154/267] Linking static target lib/librte_timer.a 00:02:48.769 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.769 [156/267] Linking static target lib/librte_compressdev.a 00:02:48.769 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.769 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.769 [159/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:49.026 [160/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:49.026 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.026 [162/267] Linking static target lib/librte_dmadev.a 00:02:49.026 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.026 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.026 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.026 [166/267] Linking static target lib/librte_ethdev.a 00:02:49.284 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.284 [168/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:49.284 [169/267] Linking static target lib/librte_hash.a 00:02:49.284 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.284 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.284 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.543 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.543 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.543 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.543 [176/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.543 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.543 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.802 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.802 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.802 [181/267] Linking static target lib/librte_cryptodev.a 00:02:49.802 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.802 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.802 [184/267] Linking static target lib/librte_power.a 00:02:50.061 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.061 [186/267] Linking static target lib/librte_reorder.a 00:02:50.061 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.061 [188/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.061 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.061 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.061 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.061 [192/267] Linking static target lib/librte_security.a 00:02:50.319 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.578 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.836 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.836 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.836 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.836 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.836 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.836 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.094 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.094 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.094 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.094 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.352 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.353 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.353 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.353 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.353 [209/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.353 [210/267] 
Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.611 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.611 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.611 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.611 [214/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.611 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.611 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.611 [217/267] Linking static target drivers/librte_bus_vdev.a 00:02:51.611 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:51.611 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.611 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.869 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.869 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.869 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.869 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:51.869 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.127 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.386 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.349 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.349 [229/267] Linking target lib/librte_eal.so.24.1 00:02:53.349 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.349 [231/267] Linking target lib/librte_pci.so.24.1 00:02:53.349 [232/267] Linking target lib/librte_ring.so.24.1 00:02:53.349 [233/267] Linking target lib/librte_timer.so.24.1 00:02:53.349 [234/267] Linking target lib/librte_meter.so.24.1 00:02:53.349 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:53.349 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.349 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.349 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.349 [239/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.349 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.607 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.607 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.607 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:53.607 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:53.607 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.607 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.607 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:53.607 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.865 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.865 [250/267] Linking 
target lib/librte_compressdev.so.24.1 00:02:53.865 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:53.865 [252/267] Linking target lib/librte_net.so.24.1 00:02:53.865 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:53.865 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.865 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.865 [256/267] Linking target lib/librte_hash.so.24.1 00:02:53.865 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:53.865 [258/267] Linking target lib/librte_security.so.24.1 00:02:54.123 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:54.380 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.380 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:54.638 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:54.638 [263/267] Linking target lib/librte_power.so.24.1 00:02:54.895 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.896 [265/267] Linking static target lib/librte_vhost.a 00:02:56.269 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.269 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:56.269 INFO: autodetecting backend as ninja 00:02:56.269 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.139 CC lib/ut_mock/mock.o 00:03:11.139 CC lib/ut/ut.o 00:03:11.139 CC lib/log/log.o 00:03:11.139 CC lib/log/log_deprecated.o 00:03:11.139 CC lib/log/log_flags.o 00:03:11.139 LIB libspdk_ut.a 00:03:11.139 LIB libspdk_ut_mock.a 00:03:11.139 SO libspdk_ut.so.2.0 00:03:11.139 SO libspdk_ut_mock.so.6.0 00:03:11.139 LIB libspdk_log.a 00:03:11.139 SO libspdk_log.so.7.1 00:03:11.139 SYMLINK libspdk_ut.so 00:03:11.139 SYMLINK libspdk_ut_mock.so 00:03:11.139 SYMLINK libspdk_log.so 00:03:11.139 CC lib/ioat/ioat.o 00:03:11.139 CXX lib/trace_parser/trace.o 00:03:11.139 CC lib/util/base64.o 00:03:11.139 CC lib/dma/dma.o 00:03:11.139 CC lib/util/cpuset.o 00:03:11.139 CC lib/util/crc16.o 00:03:11.139 CC lib/util/crc32.o 00:03:11.139 CC lib/util/crc32c.o 00:03:11.139 CC lib/util/bit_array.o 00:03:11.139 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.139 CC lib/util/crc32_ieee.o 00:03:11.139 CC lib/util/crc64.o 00:03:11.139 CC lib/util/dif.o 00:03:11.139 CC lib/util/fd.o 00:03:11.139 LIB libspdk_dma.a 00:03:11.139 CC lib/util/fd_group.o 00:03:11.139 CC lib/util/file.o 00:03:11.139 CC lib/vfio_user/host/vfio_user.o 00:03:11.139 SO libspdk_dma.so.5.0 00:03:11.139 CC lib/util/hexlify.o 00:03:11.139 SYMLINK libspdk_dma.so 00:03:11.139 LIB libspdk_ioat.a 00:03:11.139 CC lib/util/iov.o 00:03:11.139 CC lib/util/math.o 00:03:11.139 CC lib/util/net.o 00:03:11.139 SO libspdk_ioat.so.7.0 00:03:11.139 CC lib/util/pipe.o 00:03:11.139 SYMLINK libspdk_ioat.so 00:03:11.139 CC lib/util/strerror_tls.o 00:03:11.139 CC lib/util/string.o 00:03:11.139 LIB libspdk_vfio_user.a 00:03:11.139 CC lib/util/uuid.o 00:03:11.139 CC lib/util/xor.o 00:03:11.139 SO libspdk_vfio_user.so.5.0 00:03:11.139 CC lib/util/zipf.o 00:03:11.139 CC lib/util/md5.o 00:03:11.139 SYMLINK libspdk_vfio_user.so 00:03:11.139 LIB libspdk_util.a 00:03:11.139 SO libspdk_util.so.10.1 00:03:11.139 LIB libspdk_trace_parser.a 00:03:11.139 SO libspdk_trace_parser.so.6.0 00:03:11.139 SYMLINK libspdk_util.so 
00:03:11.139 SYMLINK libspdk_trace_parser.so 00:03:11.139 CC lib/env_dpdk/env.o 00:03:11.139 CC lib/rdma_provider/common.o 00:03:11.139 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:11.139 CC lib/conf/conf.o 00:03:11.139 CC lib/env_dpdk/memory.o 00:03:11.139 CC lib/env_dpdk/pci.o 00:03:11.139 CC lib/rdma_utils/rdma_utils.o 00:03:11.139 CC lib/idxd/idxd.o 00:03:11.139 CC lib/vmd/vmd.o 00:03:11.139 CC lib/json/json_parse.o 00:03:11.139 CC lib/json/json_util.o 00:03:11.139 LIB libspdk_conf.a 00:03:11.139 LIB libspdk_rdma_provider.a 00:03:11.139 SO libspdk_conf.so.6.0 00:03:11.139 SO libspdk_rdma_provider.so.6.0 00:03:11.139 SYMLINK libspdk_conf.so 00:03:11.139 CC lib/json/json_write.o 00:03:11.139 CC lib/env_dpdk/init.o 00:03:11.397 LIB libspdk_rdma_utils.a 00:03:11.397 SYMLINK libspdk_rdma_provider.so 00:03:11.397 CC lib/env_dpdk/threads.o 00:03:11.397 SO libspdk_rdma_utils.so.1.0 00:03:11.397 CC lib/env_dpdk/pci_ioat.o 00:03:11.397 SYMLINK libspdk_rdma_utils.so 00:03:11.397 CC lib/vmd/led.o 00:03:11.397 CC lib/env_dpdk/pci_virtio.o 00:03:11.397 CC lib/env_dpdk/pci_vmd.o 00:03:11.397 CC lib/env_dpdk/pci_idxd.o 00:03:11.397 LIB libspdk_json.a 00:03:11.397 CC lib/env_dpdk/pci_event.o 00:03:11.397 CC lib/idxd/idxd_user.o 00:03:11.397 SO libspdk_json.so.6.0 00:03:11.397 CC lib/idxd/idxd_kernel.o 00:03:11.656 CC lib/env_dpdk/sigbus_handler.o 00:03:11.656 SYMLINK libspdk_json.so 00:03:11.656 CC lib/env_dpdk/pci_dpdk.o 00:03:11.656 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.656 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.656 LIB libspdk_vmd.a 00:03:11.656 SO libspdk_vmd.so.6.0 00:03:11.656 LIB libspdk_idxd.a 00:03:11.656 SO libspdk_idxd.so.12.1 00:03:11.656 SYMLINK libspdk_vmd.so 00:03:11.914 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.914 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.914 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.914 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:11.914 SYMLINK libspdk_idxd.so 00:03:12.172 LIB libspdk_jsonrpc.a 00:03:12.172 SO libspdk_jsonrpc.so.6.0 00:03:12.172 SYMLINK libspdk_jsonrpc.so 00:03:12.430 CC lib/rpc/rpc.o 00:03:12.430 LIB libspdk_env_dpdk.a 00:03:12.430 SO libspdk_env_dpdk.so.15.1 00:03:12.687 LIB libspdk_rpc.a 00:03:12.687 SO libspdk_rpc.so.6.0 00:03:12.687 SYMLINK libspdk_rpc.so 00:03:12.687 SYMLINK libspdk_env_dpdk.so 00:03:12.945 CC lib/notify/notify.o 00:03:12.945 CC lib/notify/notify_rpc.o 00:03:12.945 CC lib/keyring/keyring.o 00:03:12.945 CC lib/keyring/keyring_rpc.o 00:03:12.945 CC lib/trace/trace_rpc.o 00:03:12.945 CC lib/trace/trace.o 00:03:12.945 CC lib/trace/trace_flags.o 00:03:12.945 LIB libspdk_notify.a 00:03:12.945 SO libspdk_notify.so.6.0 00:03:12.945 SYMLINK libspdk_notify.so 00:03:12.945 LIB libspdk_keyring.a 00:03:12.945 LIB libspdk_trace.a 00:03:13.203 SO libspdk_keyring.so.2.0 00:03:13.203 SO libspdk_trace.so.11.0 00:03:13.203 SYMLINK libspdk_keyring.so 00:03:13.203 SYMLINK libspdk_trace.so 00:03:13.203 CC lib/sock/sock.o 00:03:13.203 CC lib/sock/sock_rpc.o 00:03:13.479 CC lib/thread/iobuf.o 00:03:13.479 CC lib/thread/thread.o 00:03:13.757 LIB libspdk_sock.a 00:03:13.757 SO libspdk_sock.so.10.0 00:03:13.757 SYMLINK libspdk_sock.so 00:03:14.015 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:14.015 CC lib/nvme/nvme_fabric.o 00:03:14.015 CC lib/nvme/nvme_ctrlr.o 00:03:14.015 CC lib/nvme/nvme_ns.o 00:03:14.015 CC lib/nvme/nvme_ns_cmd.o 00:03:14.015 CC lib/nvme/nvme_qpair.o 00:03:14.015 CC lib/nvme/nvme.o 00:03:14.015 CC lib/nvme/nvme_pcie_common.o 00:03:14.015 CC lib/nvme/nvme_pcie.o 00:03:14.580 CC lib/nvme/nvme_quirks.o 00:03:14.580 CC 
lib/nvme/nvme_transport.o 00:03:14.580 CC lib/nvme/nvme_discovery.o 00:03:14.838 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.838 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.838 CC lib/nvme/nvme_tcp.o 00:03:14.838 CC lib/nvme/nvme_opal.o 00:03:14.838 LIB libspdk_thread.a 00:03:14.838 SO libspdk_thread.so.11.0 00:03:15.096 SYMLINK libspdk_thread.so 00:03:15.096 CC lib/nvme/nvme_io_msg.o 00:03:15.096 CC lib/nvme/nvme_poll_group.o 00:03:15.096 CC lib/nvme/nvme_zns.o 00:03:15.096 CC lib/nvme/nvme_stubs.o 00:03:15.096 CC lib/nvme/nvme_auth.o 00:03:15.096 CC lib/nvme/nvme_cuse.o 00:03:15.354 CC lib/nvme/nvme_rdma.o 00:03:15.611 CC lib/accel/accel.o 00:03:15.612 CC lib/blob/blobstore.o 00:03:15.612 CC lib/virtio/virtio.o 00:03:15.612 CC lib/init/json_config.o 00:03:15.869 CC lib/fsdev/fsdev.o 00:03:15.869 CC lib/init/subsystem.o 00:03:16.127 CC lib/virtio/virtio_vhost_user.o 00:03:16.127 CC lib/init/subsystem_rpc.o 00:03:16.127 CC lib/init/rpc.o 00:03:16.127 CC lib/blob/request.o 00:03:16.127 CC lib/blob/zeroes.o 00:03:16.127 LIB libspdk_init.a 00:03:16.127 SO libspdk_init.so.6.0 00:03:16.385 CC lib/blob/blob_bs_dev.o 00:03:16.385 SYMLINK libspdk_init.so 00:03:16.385 CC lib/accel/accel_rpc.o 00:03:16.385 CC lib/virtio/virtio_vfio_user.o 00:03:16.385 CC lib/virtio/virtio_pci.o 00:03:16.385 CC lib/fsdev/fsdev_io.o 00:03:16.385 CC lib/fsdev/fsdev_rpc.o 00:03:16.385 CC lib/event/app.o 00:03:16.385 CC lib/accel/accel_sw.o 00:03:16.385 CC lib/event/reactor.o 00:03:16.385 CC lib/event/log_rpc.o 00:03:16.642 CC lib/event/app_rpc.o 00:03:16.642 LIB libspdk_virtio.a 00:03:16.642 CC lib/event/scheduler_static.o 00:03:16.642 SO libspdk_virtio.so.7.0 00:03:16.642 LIB libspdk_nvme.a 00:03:16.642 SYMLINK libspdk_virtio.so 00:03:16.642 LIB libspdk_fsdev.a 00:03:16.900 LIB libspdk_accel.a 00:03:16.900 SO libspdk_accel.so.16.0 00:03:16.900 SO libspdk_fsdev.so.2.0 00:03:16.900 SO libspdk_nvme.so.15.0 00:03:16.900 SYMLINK libspdk_fsdev.so 00:03:16.900 SYMLINK libspdk_accel.so 00:03:16.900 LIB libspdk_event.a 00:03:16.900 SO libspdk_event.so.14.0 00:03:17.158 SYMLINK libspdk_event.so 00:03:17.158 SYMLINK libspdk_nvme.so 00:03:17.158 CC lib/bdev/bdev.o 00:03:17.158 CC lib/bdev/part.o 00:03:17.158 CC lib/bdev/bdev_rpc.o 00:03:17.158 CC lib/bdev/bdev_zone.o 00:03:17.158 CC lib/bdev/scsi_nvme.o 00:03:17.158 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:17.723 LIB libspdk_fuse_dispatcher.a 00:03:17.723 SO libspdk_fuse_dispatcher.so.1.0 00:03:17.723 SYMLINK libspdk_fuse_dispatcher.so 00:03:19.096 LIB libspdk_blob.a 00:03:19.096 SO libspdk_blob.so.11.0 00:03:19.096 SYMLINK libspdk_blob.so 00:03:19.096 CC lib/blobfs/tree.o 00:03:19.096 CC lib/blobfs/blobfs.o 00:03:19.355 CC lib/lvol/lvol.o 00:03:19.968 LIB libspdk_bdev.a 00:03:19.968 SO libspdk_bdev.so.17.0 00:03:19.968 LIB libspdk_blobfs.a 00:03:19.968 SO libspdk_blobfs.so.10.0 00:03:19.968 SYMLINK libspdk_bdev.so 00:03:19.968 SYMLINK libspdk_blobfs.so 00:03:20.247 CC lib/ublk/ublk.o 00:03:20.248 CC lib/ublk/ublk_rpc.o 00:03:20.248 CC lib/scsi/dev.o 00:03:20.248 CC lib/scsi/port.o 00:03:20.248 CC lib/scsi/lun.o 00:03:20.248 CC lib/scsi/scsi.o 00:03:20.248 CC lib/ftl/ftl_core.o 00:03:20.248 CC lib/nbd/nbd.o 00:03:20.248 CC lib/nvmf/ctrlr.o 00:03:20.248 LIB libspdk_lvol.a 00:03:20.248 SO libspdk_lvol.so.10.0 00:03:20.248 CC lib/scsi/scsi_bdev.o 00:03:20.248 CC lib/nbd/nbd_rpc.o 00:03:20.248 SYMLINK libspdk_lvol.so 00:03:20.248 CC lib/ftl/ftl_init.o 00:03:20.248 CC lib/ftl/ftl_layout.o 00:03:20.248 CC lib/scsi/scsi_pr.o 00:03:20.507 CC lib/scsi/scsi_rpc.o 00:03:20.507 CC 
lib/scsi/task.o 00:03:20.507 CC lib/ftl/ftl_debug.o 00:03:20.507 CC lib/ftl/ftl_io.o 00:03:20.507 CC lib/ftl/ftl_sb.o 00:03:20.507 LIB libspdk_nbd.a 00:03:20.507 SO libspdk_nbd.so.7.0 00:03:20.507 CC lib/ftl/ftl_l2p.o 00:03:20.507 SYMLINK libspdk_nbd.so 00:03:20.507 CC lib/ftl/ftl_l2p_flat.o 00:03:20.507 CC lib/ftl/ftl_nv_cache.o 00:03:20.507 CC lib/ftl/ftl_band.o 00:03:20.764 LIB libspdk_scsi.a 00:03:20.764 CC lib/ftl/ftl_band_ops.o 00:03:20.764 CC lib/ftl/ftl_writer.o 00:03:20.764 SO libspdk_scsi.so.9.0 00:03:20.764 CC lib/ftl/ftl_rq.o 00:03:20.764 LIB libspdk_ublk.a 00:03:20.764 CC lib/ftl/ftl_reloc.o 00:03:20.764 SYMLINK libspdk_scsi.so 00:03:20.764 CC lib/ftl/ftl_l2p_cache.o 00:03:20.764 SO libspdk_ublk.so.3.0 00:03:20.764 CC lib/ftl/ftl_p2l.o 00:03:20.764 SYMLINK libspdk_ublk.so 00:03:20.764 CC lib/ftl/ftl_p2l_log.o 00:03:21.022 CC lib/ftl/mngt/ftl_mngt.o 00:03:21.022 CC lib/iscsi/conn.o 00:03:21.022 CC lib/iscsi/init_grp.o 00:03:21.022 CC lib/iscsi/iscsi.o 00:03:21.022 CC lib/iscsi/param.o 00:03:21.280 CC lib/iscsi/portal_grp.o 00:03:21.280 CC lib/iscsi/tgt_node.o 00:03:21.280 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:21.280 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:21.280 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:21.538 CC lib/iscsi/iscsi_subsystem.o 00:03:21.538 CC lib/iscsi/iscsi_rpc.o 00:03:21.538 CC lib/iscsi/task.o 00:03:21.538 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:21.538 CC lib/vhost/vhost.o 00:03:21.538 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:21.538 CC lib/vhost/vhost_rpc.o 00:03:21.538 CC lib/nvmf/ctrlr_discovery.o 00:03:21.538 CC lib/nvmf/ctrlr_bdev.o 00:03:21.538 CC lib/nvmf/subsystem.o 00:03:21.796 CC lib/nvmf/nvmf.o 00:03:21.796 CC lib/nvmf/nvmf_rpc.o 00:03:21.796 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:21.796 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:22.054 CC lib/vhost/vhost_scsi.o 00:03:22.054 CC lib/vhost/vhost_blk.o 00:03:22.054 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:22.054 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:22.054 CC lib/vhost/rte_vhost_user.o 00:03:22.312 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:22.313 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:22.313 CC lib/nvmf/transport.o 00:03:22.313 LIB libspdk_iscsi.a 00:03:22.313 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:22.313 CC lib/nvmf/tcp.o 00:03:22.572 SO libspdk_iscsi.so.8.0 00:03:22.572 CC lib/nvmf/stubs.o 00:03:22.572 CC lib/nvmf/mdns_server.o 00:03:22.572 SYMLINK libspdk_iscsi.so 00:03:22.572 CC lib/ftl/utils/ftl_conf.o 00:03:22.572 CC lib/ftl/utils/ftl_md.o 00:03:22.572 CC lib/ftl/utils/ftl_mempool.o 00:03:22.830 CC lib/ftl/utils/ftl_bitmap.o 00:03:22.830 CC lib/ftl/utils/ftl_property.o 00:03:22.830 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:22.830 CC lib/nvmf/rdma.o 00:03:22.830 CC lib/nvmf/auth.o 00:03:23.088 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.088 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.088 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.088 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.088 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.088 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:23.088 LIB libspdk_vhost.a 00:03:23.088 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.088 SO libspdk_vhost.so.8.0 00:03:23.088 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.088 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.088 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.088 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:23.088 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:23.346 SYMLINK libspdk_vhost.so 00:03:23.346 CC lib/ftl/base/ftl_base_dev.o 00:03:23.346 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.346 CC lib/ftl/ftl_trace.o 00:03:23.603 LIB 
libspdk_ftl.a 00:03:23.603 SO libspdk_ftl.so.9.0 00:03:23.862 SYMLINK libspdk_ftl.so 00:03:24.798 LIB libspdk_nvmf.a 00:03:24.798 SO libspdk_nvmf.so.20.0 00:03:25.058 SYMLINK libspdk_nvmf.so 00:03:25.316 CC module/env_dpdk/env_dpdk_rpc.o 00:03:25.316 CC module/accel/ioat/accel_ioat.o 00:03:25.316 CC module/accel/dsa/accel_dsa.o 00:03:25.316 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:25.316 CC module/fsdev/aio/fsdev_aio.o 00:03:25.316 CC module/accel/iaa/accel_iaa.o 00:03:25.316 CC module/blob/bdev/blob_bdev.o 00:03:25.316 CC module/sock/posix/posix.o 00:03:25.316 CC module/accel/error/accel_error.o 00:03:25.316 CC module/keyring/file/keyring.o 00:03:25.316 LIB libspdk_env_dpdk_rpc.a 00:03:25.316 SO libspdk_env_dpdk_rpc.so.6.0 00:03:25.316 SYMLINK libspdk_env_dpdk_rpc.so 00:03:25.316 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:25.316 CC module/keyring/file/keyring_rpc.o 00:03:25.316 LIB libspdk_scheduler_dynamic.a 00:03:25.316 CC module/accel/error/accel_error_rpc.o 00:03:25.316 CC module/accel/ioat/accel_ioat_rpc.o 00:03:25.575 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.575 CC module/accel/dsa/accel_dsa_rpc.o 00:03:25.575 CC module/accel/iaa/accel_iaa_rpc.o 00:03:25.575 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.575 LIB libspdk_blob_bdev.a 00:03:25.575 LIB libspdk_keyring_file.a 00:03:25.575 LIB libspdk_accel_error.a 00:03:25.575 SO libspdk_blob_bdev.so.11.0 00:03:25.575 LIB libspdk_accel_ioat.a 00:03:25.575 SO libspdk_keyring_file.so.2.0 00:03:25.575 SO libspdk_accel_error.so.2.0 00:03:25.575 LIB libspdk_accel_dsa.a 00:03:25.575 SO libspdk_accel_ioat.so.6.0 00:03:25.575 SO libspdk_accel_dsa.so.5.0 00:03:25.575 SYMLINK libspdk_blob_bdev.so 00:03:25.575 SYMLINK libspdk_keyring_file.so 00:03:25.575 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:25.575 CC module/fsdev/aio/linux_aio_mgr.o 00:03:25.575 LIB libspdk_accel_iaa.a 00:03:25.575 SYMLINK libspdk_accel_error.so 00:03:25.575 SYMLINK libspdk_accel_ioat.so 00:03:25.575 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.575 SO libspdk_accel_iaa.so.3.0 00:03:25.575 SYMLINK libspdk_accel_dsa.so 00:03:25.575 SYMLINK libspdk_accel_iaa.so 00:03:25.833 CC module/keyring/linux/keyring.o 00:03:25.833 LIB libspdk_scheduler_dpdk_governor.a 00:03:25.833 CC module/keyring/linux/keyring_rpc.o 00:03:25.833 LIB libspdk_scheduler_gscheduler.a 00:03:25.833 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:25.833 SO libspdk_scheduler_gscheduler.so.4.0 00:03:25.833 CC module/bdev/gpt/gpt.o 00:03:25.833 CC module/bdev/error/vbdev_error.o 00:03:25.833 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.833 CC module/bdev/delay/vbdev_delay.o 00:03:25.833 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.833 CC module/bdev/error/vbdev_error_rpc.o 00:03:25.833 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.833 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.833 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.833 LIB libspdk_keyring_linux.a 00:03:25.833 SO libspdk_keyring_linux.so.1.0 00:03:25.833 LIB libspdk_fsdev_aio.a 00:03:25.833 SO libspdk_fsdev_aio.so.1.0 00:03:26.092 SYMLINK libspdk_keyring_linux.so 00:03:26.092 CC module/bdev/gpt/vbdev_gpt.o 00:03:26.092 LIB libspdk_blobfs_bdev.a 00:03:26.092 SYMLINK libspdk_fsdev_aio.so 00:03:26.092 SO libspdk_blobfs_bdev.so.6.0 00:03:26.092 LIB libspdk_bdev_error.a 00:03:26.092 LIB libspdk_sock_posix.a 00:03:26.092 SO libspdk_bdev_error.so.6.0 00:03:26.092 SO libspdk_sock_posix.so.6.0 00:03:26.092 SYMLINK libspdk_blobfs_bdev.so 00:03:26.092 CC module/bdev/lvol/vbdev_lvol.o 00:03:26.092 SYMLINK 
libspdk_bdev_error.so 00:03:26.092 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:26.092 CC module/bdev/malloc/bdev_malloc.o 00:03:26.092 CC module/bdev/nvme/bdev_nvme.o 00:03:26.092 CC module/bdev/null/bdev_null.o 00:03:26.092 SYMLINK libspdk_sock_posix.so 00:03:26.092 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:26.092 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.092 LIB libspdk_bdev_delay.a 00:03:26.092 SO libspdk_bdev_delay.so.6.0 00:03:26.092 CC module/bdev/raid/bdev_raid.o 00:03:26.351 LIB libspdk_bdev_gpt.a 00:03:26.351 SO libspdk_bdev_gpt.so.6.0 00:03:26.351 CC module/bdev/null/bdev_null_rpc.o 00:03:26.351 SYMLINK libspdk_bdev_delay.so 00:03:26.351 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.351 SYMLINK libspdk_bdev_gpt.so 00:03:26.351 LIB libspdk_bdev_null.a 00:03:26.351 SO libspdk_bdev_null.so.6.0 00:03:26.351 CC module/bdev/split/vbdev_split.o 00:03:26.351 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:26.351 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.351 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:26.351 SYMLINK libspdk_bdev_null.so 00:03:26.610 CC module/bdev/split/vbdev_split_rpc.o 00:03:26.610 LIB libspdk_bdev_malloc.a 00:03:26.610 SO libspdk_bdev_malloc.so.6.0 00:03:26.610 LIB libspdk_bdev_passthru.a 00:03:26.610 SYMLINK libspdk_bdev_malloc.so 00:03:26.610 LIB libspdk_bdev_lvol.a 00:03:26.610 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:26.610 CC module/bdev/nvme/nvme_rpc.o 00:03:26.610 CC module/bdev/xnvme/bdev_xnvme.o 00:03:26.610 SO libspdk_bdev_passthru.so.6.0 00:03:26.610 SO libspdk_bdev_lvol.so.6.0 00:03:26.610 LIB libspdk_bdev_split.a 00:03:26.610 SYMLINK libspdk_bdev_passthru.so 00:03:26.610 CC module/bdev/nvme/bdev_mdns_client.o 00:03:26.610 SO libspdk_bdev_split.so.6.0 00:03:26.610 SYMLINK libspdk_bdev_lvol.so 00:03:26.610 CC module/bdev/nvme/vbdev_opal.o 00:03:26.610 CC module/bdev/aio/bdev_aio.o 00:03:26.610 SYMLINK libspdk_bdev_split.so 00:03:26.610 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.869 LIB libspdk_bdev_zone_block.a 00:03:26.869 SO libspdk_bdev_zone_block.so.6.0 00:03:26.869 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:26.869 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.869 CC module/bdev/raid/raid0.o 00:03:26.869 SYMLINK libspdk_bdev_zone_block.so 00:03:26.869 CC module/bdev/raid/raid1.o 00:03:26.869 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:26.869 LIB libspdk_bdev_xnvme.a 00:03:26.869 SO libspdk_bdev_xnvme.so.3.0 00:03:26.869 CC module/bdev/ftl/bdev_ftl.o 00:03:27.126 SYMLINK libspdk_bdev_xnvme.so 00:03:27.126 LIB libspdk_bdev_aio.a 00:03:27.126 SO libspdk_bdev_aio.so.6.0 00:03:27.126 CC module/bdev/raid/concat.o 00:03:27.126 SYMLINK libspdk_bdev_aio.so 00:03:27.126 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.126 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:27.126 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:27.126 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:27.126 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:27.126 CC module/bdev/iscsi/bdev_iscsi.o 00:03:27.126 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:27.126 LIB libspdk_bdev_raid.a 00:03:27.384 LIB libspdk_bdev_ftl.a 00:03:27.384 SO libspdk_bdev_raid.so.6.0 00:03:27.384 SO libspdk_bdev_ftl.so.6.0 00:03:27.384 SYMLINK libspdk_bdev_ftl.so 00:03:27.384 SYMLINK libspdk_bdev_raid.so 00:03:27.643 LIB libspdk_bdev_iscsi.a 00:03:27.643 SO libspdk_bdev_iscsi.so.6.0 00:03:27.643 SYMLINK libspdk_bdev_iscsi.so 00:03:27.643 LIB libspdk_bdev_virtio.a 00:03:27.643 SO libspdk_bdev_virtio.so.6.0 00:03:27.903 SYMLINK libspdk_bdev_virtio.so 00:03:28.161 LIB libspdk_bdev_nvme.a 
00:03:28.418 SO libspdk_bdev_nvme.so.7.1 00:03:28.418 SYMLINK libspdk_bdev_nvme.so 00:03:28.676 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.676 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.676 CC module/event/subsystems/keyring/keyring.o 00:03:28.676 CC module/event/subsystems/vmd/vmd.o 00:03:28.676 CC module/event/subsystems/sock/sock.o 00:03:28.676 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.676 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.676 CC module/event/subsystems/fsdev/fsdev.o 00:03:28.676 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.935 LIB libspdk_event_sock.a 00:03:28.935 LIB libspdk_event_keyring.a 00:03:28.935 LIB libspdk_event_scheduler.a 00:03:28.935 LIB libspdk_event_fsdev.a 00:03:28.935 LIB libspdk_event_vhost_blk.a 00:03:28.935 SO libspdk_event_sock.so.5.0 00:03:28.935 SO libspdk_event_keyring.so.1.0 00:03:28.936 LIB libspdk_event_vmd.a 00:03:28.936 LIB libspdk_event_iobuf.a 00:03:28.936 SO libspdk_event_scheduler.so.4.0 00:03:28.936 SO libspdk_event_fsdev.so.1.0 00:03:28.936 SO libspdk_event_vhost_blk.so.3.0 00:03:28.936 SO libspdk_event_vmd.so.6.0 00:03:28.936 SO libspdk_event_iobuf.so.3.0 00:03:28.936 SYMLINK libspdk_event_keyring.so 00:03:28.936 SYMLINK libspdk_event_sock.so 00:03:28.936 SYMLINK libspdk_event_scheduler.so 00:03:28.936 SYMLINK libspdk_event_fsdev.so 00:03:28.936 SYMLINK libspdk_event_vhost_blk.so 00:03:28.936 SYMLINK libspdk_event_vmd.so 00:03:28.936 SYMLINK libspdk_event_iobuf.so 00:03:29.211 CC module/event/subsystems/accel/accel.o 00:03:29.211 LIB libspdk_event_accel.a 00:03:29.498 SO libspdk_event_accel.so.6.0 00:03:29.498 SYMLINK libspdk_event_accel.so 00:03:29.758 CC module/event/subsystems/bdev/bdev.o 00:03:29.758 LIB libspdk_event_bdev.a 00:03:29.758 SO libspdk_event_bdev.so.6.0 00:03:29.758 SYMLINK libspdk_event_bdev.so 00:03:30.016 CC module/event/subsystems/scsi/scsi.o 00:03:30.016 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.016 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.016 CC module/event/subsystems/ublk/ublk.o 00:03:30.016 CC module/event/subsystems/nbd/nbd.o 00:03:30.016 LIB libspdk_event_nbd.a 00:03:30.016 LIB libspdk_event_ublk.a 00:03:30.016 LIB libspdk_event_scsi.a 00:03:30.273 SO libspdk_event_nbd.so.6.0 00:03:30.273 SO libspdk_event_ublk.so.3.0 00:03:30.273 SO libspdk_event_scsi.so.6.0 00:03:30.273 SYMLINK libspdk_event_ublk.so 00:03:30.273 LIB libspdk_event_nvmf.a 00:03:30.273 SYMLINK libspdk_event_nbd.so 00:03:30.273 SO libspdk_event_nvmf.so.6.0 00:03:30.273 SYMLINK libspdk_event_scsi.so 00:03:30.273 SYMLINK libspdk_event_nvmf.so 00:03:30.273 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.532 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.532 LIB libspdk_event_vhost_scsi.a 00:03:30.532 LIB libspdk_event_iscsi.a 00:03:30.532 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.532 SO libspdk_event_iscsi.so.6.0 00:03:30.532 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.532 SYMLINK libspdk_event_iscsi.so 00:03:30.791 SO libspdk.so.6.0 00:03:30.791 SYMLINK libspdk.so 00:03:30.791 CXX app/trace/trace.o 00:03:30.791 TEST_HEADER include/spdk/accel.h 00:03:30.791 TEST_HEADER include/spdk/accel_module.h 00:03:30.791 TEST_HEADER include/spdk/assert.h 00:03:30.791 TEST_HEADER include/spdk/barrier.h 00:03:30.791 CC app/trace_record/trace_record.o 00:03:30.791 TEST_HEADER include/spdk/base64.h 00:03:30.791 TEST_HEADER include/spdk/bdev.h 00:03:30.791 TEST_HEADER include/spdk/bdev_module.h 00:03:30.791 TEST_HEADER include/spdk/bdev_zone.h 00:03:30.791 TEST_HEADER 
include/spdk/bit_array.h 00:03:31.049 TEST_HEADER include/spdk/bit_pool.h 00:03:31.049 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.049 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.050 TEST_HEADER include/spdk/blobfs.h 00:03:31.050 TEST_HEADER include/spdk/blob.h 00:03:31.050 TEST_HEADER include/spdk/conf.h 00:03:31.050 TEST_HEADER include/spdk/config.h 00:03:31.050 TEST_HEADER include/spdk/cpuset.h 00:03:31.050 TEST_HEADER include/spdk/crc16.h 00:03:31.050 TEST_HEADER include/spdk/crc32.h 00:03:31.050 TEST_HEADER include/spdk/crc64.h 00:03:31.050 TEST_HEADER include/spdk/dif.h 00:03:31.050 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.050 TEST_HEADER include/spdk/dma.h 00:03:31.050 CC app/nvmf_tgt/nvmf_main.o 00:03:31.050 TEST_HEADER include/spdk/endian.h 00:03:31.050 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.050 TEST_HEADER include/spdk/env.h 00:03:31.050 TEST_HEADER include/spdk/event.h 00:03:31.050 TEST_HEADER include/spdk/fd_group.h 00:03:31.050 TEST_HEADER include/spdk/fd.h 00:03:31.050 CC app/spdk_tgt/spdk_tgt.o 00:03:31.050 TEST_HEADER include/spdk/file.h 00:03:31.050 TEST_HEADER include/spdk/fsdev.h 00:03:31.050 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.050 TEST_HEADER include/spdk/ftl.h 00:03:31.050 CC examples/util/zipf/zipf.o 00:03:31.050 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.050 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.050 TEST_HEADER include/spdk/hexlify.h 00:03:31.050 TEST_HEADER include/spdk/histogram_data.h 00:03:31.050 CC test/thread/poller_perf/poller_perf.o 00:03:31.050 TEST_HEADER include/spdk/idxd.h 00:03:31.050 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.050 TEST_HEADER include/spdk/init.h 00:03:31.050 TEST_HEADER include/spdk/ioat.h 00:03:31.050 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.050 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.050 TEST_HEADER include/spdk/json.h 00:03:31.050 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.050 TEST_HEADER include/spdk/keyring.h 00:03:31.050 TEST_HEADER include/spdk/keyring_module.h 00:03:31.050 TEST_HEADER include/spdk/likely.h 00:03:31.050 TEST_HEADER include/spdk/log.h 00:03:31.050 CC test/app/bdev_svc/bdev_svc.o 00:03:31.050 TEST_HEADER include/spdk/lvol.h 00:03:31.050 TEST_HEADER include/spdk/md5.h 00:03:31.050 TEST_HEADER include/spdk/memory.h 00:03:31.050 TEST_HEADER include/spdk/mmio.h 00:03:31.050 CC test/dma/test_dma/test_dma.o 00:03:31.050 TEST_HEADER include/spdk/nbd.h 00:03:31.050 TEST_HEADER include/spdk/net.h 00:03:31.050 TEST_HEADER include/spdk/notify.h 00:03:31.050 TEST_HEADER include/spdk/nvme.h 00:03:31.050 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.050 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.050 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.050 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.050 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.050 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.050 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.050 TEST_HEADER include/spdk/nvmf.h 00:03:31.050 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.050 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.050 TEST_HEADER include/spdk/opal.h 00:03:31.050 TEST_HEADER include/spdk/opal_spec.h 00:03:31.050 TEST_HEADER include/spdk/pci_ids.h 00:03:31.050 TEST_HEADER include/spdk/pipe.h 00:03:31.050 TEST_HEADER include/spdk/queue.h 00:03:31.050 TEST_HEADER include/spdk/reduce.h 00:03:31.050 TEST_HEADER include/spdk/rpc.h 00:03:31.050 TEST_HEADER include/spdk/scheduler.h 00:03:31.050 TEST_HEADER include/spdk/scsi.h 00:03:31.050 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.050 TEST_HEADER 
include/spdk/sock.h 00:03:31.050 TEST_HEADER include/spdk/stdinc.h 00:03:31.050 TEST_HEADER include/spdk/string.h 00:03:31.050 TEST_HEADER include/spdk/thread.h 00:03:31.050 TEST_HEADER include/spdk/trace.h 00:03:31.050 TEST_HEADER include/spdk/trace_parser.h 00:03:31.050 TEST_HEADER include/spdk/tree.h 00:03:31.050 TEST_HEADER include/spdk/ublk.h 00:03:31.050 TEST_HEADER include/spdk/util.h 00:03:31.050 TEST_HEADER include/spdk/uuid.h 00:03:31.050 TEST_HEADER include/spdk/version.h 00:03:31.050 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.050 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.050 TEST_HEADER include/spdk/vhost.h 00:03:31.050 TEST_HEADER include/spdk/vmd.h 00:03:31.050 TEST_HEADER include/spdk/xor.h 00:03:31.050 TEST_HEADER include/spdk/zipf.h 00:03:31.050 CXX test/cpp_headers/accel.o 00:03:31.050 LINK nvmf_tgt 00:03:31.050 LINK zipf 00:03:31.050 LINK spdk_trace_record 00:03:31.050 LINK poller_perf 00:03:31.050 LINK iscsi_tgt 00:03:31.050 LINK spdk_tgt 00:03:31.050 LINK bdev_svc 00:03:31.308 CXX test/cpp_headers/accel_module.o 00:03:31.308 LINK spdk_trace 00:03:31.308 CXX test/cpp_headers/assert.o 00:03:31.308 CXX test/cpp_headers/barrier.o 00:03:31.308 CXX test/cpp_headers/base64.o 00:03:31.308 CC test/app/histogram_perf/histogram_perf.o 00:03:31.308 CC test/app/stub/stub.o 00:03:31.308 CC examples/ioat/perf/perf.o 00:03:31.308 CC test/app/jsoncat/jsoncat.o 00:03:31.308 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:31.565 CC app/spdk_lspci/spdk_lspci.o 00:03:31.566 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:31.566 CXX test/cpp_headers/bdev.o 00:03:31.566 LINK jsoncat 00:03:31.566 LINK test_dma 00:03:31.566 LINK histogram_perf 00:03:31.566 LINK stub 00:03:31.566 LINK spdk_lspci 00:03:31.566 LINK ioat_perf 00:03:31.566 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.566 CXX test/cpp_headers/bdev_module.o 00:03:31.825 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:31.825 CC app/spdk_nvme_perf/perf.o 00:03:31.825 CC test/rpc_client/rpc_client_test.o 00:03:31.825 CC examples/ioat/verify/verify.o 00:03:31.825 CC test/event/event_perf/event_perf.o 00:03:31.825 CXX test/cpp_headers/bdev_zone.o 00:03:31.825 LINK nvme_fuzz 00:03:31.825 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:31.825 CC test/accel/dif/dif.o 00:03:31.825 LINK event_perf 00:03:31.825 LINK rpc_client_test 00:03:32.084 CXX test/cpp_headers/bit_array.o 00:03:32.084 LINK verify 00:03:32.084 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.084 LINK mem_callbacks 00:03:32.084 CXX test/cpp_headers/bit_pool.o 00:03:32.084 CC test/event/reactor/reactor.o 00:03:32.084 CC test/event/reactor_perf/reactor_perf.o 00:03:32.084 LINK lsvmd 00:03:32.344 CC test/event/app_repeat/app_repeat.o 00:03:32.344 LINK vhost_fuzz 00:03:32.344 CC test/env/vtophys/vtophys.o 00:03:32.344 CXX test/cpp_headers/blob_bdev.o 00:03:32.344 LINK reactor_perf 00:03:32.344 LINK reactor 00:03:32.344 CC examples/vmd/led/led.o 00:03:32.344 CXX test/cpp_headers/blobfs_bdev.o 00:03:32.344 LINK app_repeat 00:03:32.344 LINK vtophys 00:03:32.344 CC app/spdk_nvme_identify/identify.o 00:03:32.344 CC test/event/scheduler/scheduler.o 00:03:32.605 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.605 LINK led 00:03:32.605 CXX test/cpp_headers/blobfs.o 00:03:32.605 LINK spdk_nvme_perf 00:03:32.605 CC app/spdk_top/spdk_top.o 00:03:32.605 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:32.605 LINK dif 00:03:32.605 LINK spdk_nvme_discover 00:03:32.605 CXX test/cpp_headers/blob.o 00:03:32.605 LINK scheduler 00:03:32.605 LINK env_dpdk_post_init 
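The TEST_HEADER/CXX pairs above come from the build step that compiles every public SPDK header as its own translation unit, so a header that fails to pull in its own dependencies breaks the build on its own. A minimal sketch of that kind of check, illustrative only: the actual rule lives in SPDK's mk build system, and the g++ flags here are assumptions.

for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # compile the header standalone; a missing #include inside it fails here
    echo "#include <spdk/${name}.h>" \
        | g++ -x c++ -I include -c - -o "test/cpp_headers/${name}.o"
done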
00:03:32.867 CC examples/idxd/perf/perf.o 00:03:32.867 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.867 CXX test/cpp_headers/conf.o 00:03:32.867 CC test/env/memory/memory_ut.o 00:03:32.867 CC test/env/pci/pci_ut.o 00:03:32.867 CXX test/cpp_headers/config.o 00:03:32.867 LINK interrupt_tgt 00:03:32.867 CC app/vhost/vhost.o 00:03:32.867 CXX test/cpp_headers/cpuset.o 00:03:33.126 CC examples/thread/thread/thread_ex.o 00:03:33.126 LINK idxd_perf 00:03:33.126 CXX test/cpp_headers/crc16.o 00:03:33.126 LINK vhost 00:03:33.126 LINK iscsi_fuzz 00:03:33.386 LINK thread 00:03:33.386 CXX test/cpp_headers/crc32.o 00:03:33.386 LINK spdk_nvme_identify 00:03:33.386 CC examples/sock/hello_world/hello_sock.o 00:03:33.386 LINK pci_ut 00:03:33.386 CC app/spdk_dd/spdk_dd.o 00:03:33.386 CXX test/cpp_headers/crc64.o 00:03:33.386 CC app/fio/nvme/fio_plugin.o 00:03:33.647 LINK spdk_top 00:03:33.647 LINK hello_sock 00:03:33.647 CC test/blobfs/mkfs/mkfs.o 00:03:33.647 CC app/fio/bdev/fio_plugin.o 00:03:33.647 CXX test/cpp_headers/dif.o 00:03:33.647 CC examples/accel/perf/accel_perf.o 00:03:33.647 LINK spdk_dd 00:03:33.647 LINK mkfs 00:03:33.647 CXX test/cpp_headers/dma.o 00:03:33.647 CC examples/blob/hello_world/hello_blob.o 00:03:33.910 CC examples/blob/cli/blobcli.o 00:03:33.910 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:33.910 CXX test/cpp_headers/endian.o 00:03:33.910 LINK memory_ut 00:03:33.910 LINK hello_blob 00:03:34.172 CXX test/cpp_headers/env_dpdk.o 00:03:34.172 CC examples/nvme/hello_world/hello_world.o 00:03:34.172 LINK spdk_bdev 00:03:34.172 LINK spdk_nvme 00:03:34.172 LINK accel_perf 00:03:34.172 LINK hello_fsdev 00:03:34.172 CC test/lvol/esnap/esnap.o 00:03:34.172 CXX test/cpp_headers/env.o 00:03:34.172 CC examples/nvme/reconnect/reconnect.o 00:03:34.172 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.433 CC test/nvme/aer/aer.o 00:03:34.433 LINK blobcli 00:03:34.433 LINK hello_world 00:03:34.433 CC test/bdev/bdevio/bdevio.o 00:03:34.433 CXX test/cpp_headers/event.o 00:03:34.433 CC test/nvme/reset/reset.o 00:03:34.433 CC test/nvme/sgl/sgl.o 00:03:34.433 CXX test/cpp_headers/fd_group.o 00:03:34.433 CC test/nvme/e2edp/nvme_dp.o 00:03:34.433 CC test/nvme/overhead/overhead.o 00:03:34.695 LINK aer 00:03:34.695 LINK reconnect 00:03:34.695 LINK reset 00:03:34.695 CXX test/cpp_headers/fd.o 00:03:34.695 LINK sgl 00:03:34.695 CXX test/cpp_headers/file.o 00:03:34.695 LINK bdevio 00:03:34.695 LINK nvme_dp 00:03:34.958 LINK nvme_manage 00:03:34.958 LINK overhead 00:03:34.958 CC examples/nvme/arbitration/arbitration.o 00:03:34.958 CC examples/nvme/hotplug/hotplug.o 00:03:34.958 CXX test/cpp_headers/fsdev.o 00:03:34.958 CC test/nvme/err_injection/err_injection.o 00:03:34.958 CXX test/cpp_headers/fsdev_module.o 00:03:34.958 CC examples/bdev/hello_world/hello_bdev.o 00:03:34.958 CC test/nvme/startup/startup.o 00:03:35.218 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.218 CC examples/nvme/abort/abort.o 00:03:35.218 CXX test/cpp_headers/ftl.o 00:03:35.218 CC test/nvme/reserve/reserve.o 00:03:35.218 LINK hotplug 00:03:35.218 LINK err_injection 00:03:35.218 LINK arbitration 00:03:35.218 LINK startup 00:03:35.218 LINK hello_bdev 00:03:35.218 LINK cmb_copy 00:03:35.218 CXX test/cpp_headers/fuse_dispatcher.o 00:03:35.477 CXX test/cpp_headers/gpt_spec.o 00:03:35.477 CC test/nvme/simple_copy/simple_copy.o 00:03:35.477 LINK reserve 00:03:35.477 CC test/nvme/connect_stress/connect_stress.o 00:03:35.477 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.477 CC test/nvme/boot_partition/boot_partition.o 
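Among the objects above, app/fio/nvme/fio_plugin.o and app/fio/bdev/fio_plugin.o build SPDK's external fio engines. Typical usage of the NVMe engine looks like the following sketch; the LD_PRELOAD path and the traddr are illustrative assumptions for this VM's 0000:00:10.0 device, not taken from this log.

# Hypothetical fio run against the plugin built above (thread=1 is
# required by the SPDK engine; adjust path and traddr to the local setup).
LD_PRELOAD=./build/fio/spdk_nvme fio --name=randread --thread=1 \
    --ioengine=spdk --filename='trtype=PCIe traddr=0000.00.10.0 ns=1' \
    --rw=randread --bs=4k --iodepth=32 --time_based=1 --runtime=10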
00:03:35.477 LINK abort 00:03:35.477 CXX test/cpp_headers/hexlify.o 00:03:35.477 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.477 CC test/nvme/compliance/nvme_compliance.o 00:03:35.477 LINK connect_stress 00:03:35.477 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.477 LINK pmr_persistence 00:03:35.738 LINK simple_copy 00:03:35.738 LINK boot_partition 00:03:35.738 CXX test/cpp_headers/histogram_data.o 00:03:35.738 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.738 CXX test/cpp_headers/idxd.o 00:03:35.738 CC test/nvme/fdp/fdp.o 00:03:35.738 CXX test/cpp_headers/idxd_spec.o 00:03:35.738 LINK fused_ordering 00:03:35.738 CXX test/cpp_headers/init.o 00:03:35.738 CC test/nvme/cuse/cuse.o 00:03:35.738 LINK nvme_compliance 00:03:35.738 LINK doorbell_aers 00:03:35.998 CXX test/cpp_headers/ioat.o 00:03:35.998 CXX test/cpp_headers/ioat_spec.o 00:03:35.998 CXX test/cpp_headers/iscsi_spec.o 00:03:35.998 CXX test/cpp_headers/json.o 00:03:35.998 CXX test/cpp_headers/jsonrpc.o 00:03:35.998 CXX test/cpp_headers/keyring.o 00:03:35.998 CXX test/cpp_headers/keyring_module.o 00:03:35.998 CXX test/cpp_headers/likely.o 00:03:35.998 CXX test/cpp_headers/log.o 00:03:35.998 LINK fdp 00:03:35.998 CXX test/cpp_headers/lvol.o 00:03:35.998 CXX test/cpp_headers/md5.o 00:03:35.998 CXX test/cpp_headers/memory.o 00:03:36.256 CXX test/cpp_headers/mmio.o 00:03:36.256 CXX test/cpp_headers/nbd.o 00:03:36.256 CXX test/cpp_headers/net.o 00:03:36.256 CXX test/cpp_headers/notify.o 00:03:36.256 CXX test/cpp_headers/nvme.o 00:03:36.256 CXX test/cpp_headers/nvme_intel.o 00:03:36.256 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.256 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.256 CXX test/cpp_headers/nvme_spec.o 00:03:36.256 CXX test/cpp_headers/nvme_zns.o 00:03:36.256 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.256 LINK bdevperf 00:03:36.256 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.514 CXX test/cpp_headers/nvmf.o 00:03:36.514 CXX test/cpp_headers/nvmf_spec.o 00:03:36.514 CXX test/cpp_headers/nvmf_transport.o 00:03:36.514 CXX test/cpp_headers/opal.o 00:03:36.514 CXX test/cpp_headers/opal_spec.o 00:03:36.514 CXX test/cpp_headers/pci_ids.o 00:03:36.514 CXX test/cpp_headers/pipe.o 00:03:36.514 CXX test/cpp_headers/queue.o 00:03:36.514 CXX test/cpp_headers/reduce.o 00:03:36.514 CXX test/cpp_headers/rpc.o 00:03:36.514 CXX test/cpp_headers/scheduler.o 00:03:36.514 CXX test/cpp_headers/scsi.o 00:03:36.514 CXX test/cpp_headers/scsi_spec.o 00:03:36.514 CXX test/cpp_headers/sock.o 00:03:36.771 CC examples/nvmf/nvmf/nvmf.o 00:03:36.771 CXX test/cpp_headers/stdinc.o 00:03:36.771 CXX test/cpp_headers/string.o 00:03:36.771 CXX test/cpp_headers/thread.o 00:03:36.771 CXX test/cpp_headers/trace.o 00:03:36.771 CXX test/cpp_headers/trace_parser.o 00:03:36.771 CXX test/cpp_headers/tree.o 00:03:36.771 CXX test/cpp_headers/ublk.o 00:03:36.771 CXX test/cpp_headers/util.o 00:03:36.771 CXX test/cpp_headers/uuid.o 00:03:36.771 CXX test/cpp_headers/version.o 00:03:36.771 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.771 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.771 CXX test/cpp_headers/vhost.o 00:03:37.030 CXX test/cpp_headers/vmd.o 00:03:37.030 CXX test/cpp_headers/xor.o 00:03:37.030 CXX test/cpp_headers/zipf.o 00:03:37.030 LINK cuse 00:03:37.030 LINK nvmf 00:03:39.622 LINK esnap 00:03:39.622 00:03:39.622 real 1m8.049s 00:03:39.622 user 6m16.824s 00:03:39.622 sys 1m5.442s 00:03:39.622 ************************************ 00:03:39.622 END TEST make 00:03:39.622 ************************************ 00:03:39.622 11:46:37 make -- 
common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:39.622 11:46:37 make -- common/autotest_common.sh@10 -- $ set +x 00:03:39.881 11:46:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:39.881 11:46:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:39.881 11:46:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:39.881 11:46:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.881 11:46:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:39.881 11:46:37 -- pm/common@44 -- $ pid=5065 00:03:39.881 11:46:37 -- pm/common@50 -- $ kill -TERM 5065 00:03:39.881 11:46:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.881 11:46:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:39.881 11:46:37 -- pm/common@44 -- $ pid=5066 00:03:39.881 11:46:37 -- pm/common@50 -- $ kill -TERM 5066 00:03:39.881 11:46:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:39.881 11:46:37 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:39.881 11:46:37 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:39.881 11:46:37 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:39.881 11:46:37 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:39.881 11:46:37 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:39.881 11:46:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.881 11:46:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.881 11:46:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.881 11:46:37 -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.882 11:46:37 -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.882 11:46:37 -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.882 11:46:37 -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.882 11:46:37 -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.882 11:46:37 -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.882 11:46:37 -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.882 11:46:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.882 11:46:37 -- scripts/common.sh@344 -- # case "$op" in 00:03:39.882 11:46:37 -- scripts/common.sh@345 -- # : 1 00:03:39.882 11:46:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.882 11:46:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.882 11:46:37 -- scripts/common.sh@365 -- # decimal 1 00:03:39.882 11:46:37 -- scripts/common.sh@353 -- # local d=1 00:03:39.882 11:46:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.882 11:46:37 -- scripts/common.sh@355 -- # echo 1 00:03:39.882 11:46:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.882 11:46:37 -- scripts/common.sh@366 -- # decimal 2 00:03:39.882 11:46:37 -- scripts/common.sh@353 -- # local d=2 00:03:39.882 11:46:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.882 11:46:37 -- scripts/common.sh@355 -- # echo 2 00:03:39.882 11:46:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.882 11:46:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.882 11:46:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.882 11:46:37 -- scripts/common.sh@368 -- # return 0 00:03:39.882 11:46:37 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.882 11:46:37 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:39.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.882 --rc genhtml_branch_coverage=1 00:03:39.882 --rc genhtml_function_coverage=1 00:03:39.882 --rc genhtml_legend=1 00:03:39.882 --rc geninfo_all_blocks=1 00:03:39.882 --rc geninfo_unexecuted_blocks=1 00:03:39.882 00:03:39.882 ' 00:03:39.882 11:46:37 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:39.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.882 --rc genhtml_branch_coverage=1 00:03:39.882 --rc genhtml_function_coverage=1 00:03:39.882 --rc genhtml_legend=1 00:03:39.882 --rc geninfo_all_blocks=1 00:03:39.882 --rc geninfo_unexecuted_blocks=1 00:03:39.882 00:03:39.882 ' 00:03:39.882 11:46:37 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:39.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.882 --rc genhtml_branch_coverage=1 00:03:39.882 --rc genhtml_function_coverage=1 00:03:39.882 --rc genhtml_legend=1 00:03:39.882 --rc geninfo_all_blocks=1 00:03:39.882 --rc geninfo_unexecuted_blocks=1 00:03:39.882 00:03:39.882 ' 00:03:39.882 11:46:37 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:39.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.882 --rc genhtml_branch_coverage=1 00:03:39.882 --rc genhtml_function_coverage=1 00:03:39.882 --rc genhtml_legend=1 00:03:39.882 --rc geninfo_all_blocks=1 00:03:39.882 --rc geninfo_unexecuted_blocks=1 00:03:39.882 00:03:39.882 ' 00:03:39.882 11:46:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:39.882 11:46:37 -- nvmf/common.sh@7 -- # uname -s 00:03:39.882 11:46:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.882 11:46:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.882 11:46:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.882 11:46:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.882 11:46:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.882 11:46:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.882 11:46:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.882 11:46:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.882 11:46:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.882 11:46:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.882 11:46:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cd5049a-d906-44d6-9f42-91ef1a8b3187 00:03:39.882 
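The scripts/common.sh xtrace above (lt 1.15 2, cmp_versions, decimal) is the gate that picks lcov options for versions older than 2.x: the compare walks the dot-separated version fields left to right and decides at the first unequal field. This sketch condenses the traced logic; it simplifies the real helper (only <, > and == are modeled, and decimal()'s digit validation is folded into a :-0 default).

cmp_versions() {
    local -a v1 v2
    local op=$2 i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$3"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        # missing fields compare as 0, so 1.15 vs 2 becomes 1.15 vs 2.0
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken above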
11:46:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cd5049a-d906-44d6-9f42-91ef1a8b3187 00:03:39.882 11:46:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.882 11:46:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.882 11:46:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:39.882 11:46:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.882 11:46:37 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:39.882 11:46:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:39.882 11:46:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.882 11:46:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.882 11:46:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.882 11:46:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.882 11:46:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.882 11:46:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.882 11:46:37 -- paths/export.sh@5 -- # export PATH 00:03:39.882 11:46:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.882 11:46:37 -- nvmf/common.sh@51 -- # : 0 00:03:39.882 11:46:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:39.882 11:46:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:39.882 11:46:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.882 11:46:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.882 11:46:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.882 11:46:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:39.882 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:39.882 11:46:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:39.882 11:46:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:39.882 11:46:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:39.882 11:46:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.882 11:46:37 -- spdk/autotest.sh@32 -- # uname -s 00:03:39.882 11:46:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.882 11:46:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:39.882 11:46:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.882 11:46:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.882 11:46:37 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.882 11:46:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.143 11:46:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.143 11:46:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.143 11:46:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54255 00:03:40.143 11:46:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.143 11:46:37 -- pm/common@17 -- # local monitor 00:03:40.143 11:46:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.143 11:46:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.143 11:46:37 -- pm/common@25 -- # sleep 1 00:03:40.143 11:46:37 -- pm/common@21 -- # date +%s 00:03:40.143 11:46:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.143 11:46:37 -- pm/common@21 -- # date +%s 00:03:40.143 11:46:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731930397 00:03:40.143 11:46:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731930397 00:03:40.143 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731930397_collect-cpu-load.pm.log 00:03:40.143 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731930397_collect-vmstat.pm.log 00:03:41.085 11:46:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.085 11:46:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.085 11:46:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:41.085 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:03:41.085 11:46:38 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.085 11:46:38 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:41.085 11:46:38 -- common/autotest_common.sh@10 -- # set +x 00:03:41.085 11:46:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.085 11:46:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.085 11:46:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.085 11:46:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.085 11:46:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.085 11:46:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.085 11:46:38 -- common/autotest_common.sh@1455 -- # uname 00:03:41.085 11:46:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:41.085 11:46:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.085 11:46:38 -- common/autotest_common.sh@1475 -- # uname 00:03:41.085 11:46:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:41.085 11:46:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:41.085 11:46:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:41.085 lcov: LCOV version 1.15 00:03:41.085 11:46:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:55.980 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:55.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:10.883 11:47:08 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:10.883 11:47:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.883 11:47:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.883 11:47:08 -- spdk/autotest.sh@78 -- # rm -f 00:04:10.883 11:47:08 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.024 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:12.024 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:12.024 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:12.024 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:12.024 11:47:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.024 11:47:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:12.024 11:47:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:12.024 11:47:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:12.024 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.024 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:12.024 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.025 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.025 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.025 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:12.025 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.025 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:12.025 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:12.025 11:47:09 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.025 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.025 11:47:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:12.025 11:47:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:12.025 11:47:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.025 11:47:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.025 11:47:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.025 11:47:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.025 11:47:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.025 11:47:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.025 11:47:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.025 No valid GPT data, bailing 00:04:12.025 11:47:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.025 11:47:09 -- scripts/common.sh@394 -- # pt= 00:04:12.025 11:47:09 -- scripts/common.sh@395 -- # return 1 00:04:12.025 11:47:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.025 1+0 records in 00:04:12.025 1+0 records out 00:04:12.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267622 s, 39.2 MB/s 00:04:12.025 11:47:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.025 11:47:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.025 11:47:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:12.025 11:47:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:12.025 11:47:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:12.025 No valid GPT data, bailing 00:04:12.025 11:47:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.025 11:47:09 -- scripts/common.sh@394 -- # pt= 00:04:12.025 11:47:09 -- scripts/common.sh@395 -- # return 1 00:04:12.025 11:47:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.025 1+0 records in 00:04:12.025 1+0 records out 00:04:12.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427782 s, 245 MB/s 00:04:12.025 11:47:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.025 11:47:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.025 11:47:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:12.025 11:47:09 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:12.025 11:47:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:12.283 No valid GPT data, bailing 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # pt= 00:04:12.283 11:47:09 -- scripts/common.sh@395 -- # return 1 00:04:12.283 11:47:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.283 1+0 
records in 00:04:12.283 1+0 records out 00:04:12.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563254 s, 186 MB/s 00:04:12.283 11:47:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.283 11:47:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.283 11:47:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:12.283 11:47:09 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:12.283 11:47:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:12.283 No valid GPT data, bailing 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # pt= 00:04:12.283 11:47:09 -- scripts/common.sh@395 -- # return 1 00:04:12.283 11:47:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:12.283 1+0 records in 00:04:12.283 1+0 records out 00:04:12.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579384 s, 181 MB/s 00:04:12.283 11:47:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.283 11:47:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.283 11:47:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:12.283 11:47:09 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:12.283 11:47:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:12.283 No valid GPT data, bailing 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # pt= 00:04:12.283 11:47:09 -- scripts/common.sh@395 -- # return 1 00:04:12.283 11:47:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:12.283 1+0 records in 00:04:12.283 1+0 records out 00:04:12.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557601 s, 188 MB/s 00:04:12.283 11:47:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.283 11:47:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.283 11:47:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:12.283 11:47:09 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:12.283 11:47:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.283 No valid GPT data, bailing 00:04:12.283 11:47:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.540 11:47:09 -- scripts/common.sh@394 -- # pt= 00:04:12.540 11:47:09 -- scripts/common.sh@395 -- # return 1 00:04:12.540 11:47:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.540 1+0 records in 00:04:12.540 1+0 records out 00:04:12.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434219 s, 241 MB/s 00:04:12.540 11:47:09 -- spdk/autotest.sh@105 -- # sync 00:04:12.540 11:47:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.540 11:47:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.540 11:47:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:13.923 11:47:11 -- spdk/autotest.sh@111 -- # uname -s 00:04:13.923 11:47:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:13.923 11:47:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:13.923 11:47:11 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.748 
Hugepages
00:04:14.748 node hugesize free / total
00:04:14.748 node0 1048576kB 0 / 0
00:04:14.748 node0 2048kB 0 / 0
00:04:14.748
00:04:14.748 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:14.748 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:14.748 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:15.007 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:15.007 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:15.007 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:15.007 11:47:12 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.007 11:47:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.007 11:47:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.007 11:47:12 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.830 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.830 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.830 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.087 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.087 11:47:13 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:17.019 11:47:14 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:17.019 11:47:14 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:17.019 11:47:14 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:17.019 11:47:14 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:17.019 11:47:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:17.019 11:47:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:17.019 11:47:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.019 11:47:14 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:17.019 11:47:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:17.019 11:47:14 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:17.019 11:47:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:17.019 11:47:14 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.533 Waiting for block devices as requested 00:04:17.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.533 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.533 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.794 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.080 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:23.080 11:47:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:23.080 11:47:20 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:23.080 11:47:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:23.080 11:47:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:23.080 11:47:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:23.080 11:47:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1541 -- # continue 00:04:23.080 11:47:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:23.080 11:47:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:23.080 11:47:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:23.080 11:47:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1541 -- # continue 00:04:23.080 11:47:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:23.080 11:47:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:23.080 11:47:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:23.080 11:47:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:23.080 11:47:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1541 -- # continue 00:04:23.080 11:47:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:23.080 11:47:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.080 11:47:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:23.080 11:47:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:23.080 11:47:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:23.081 11:47:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:23.081 11:47:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:23.081 11:47:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:23.081 11:47:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:23.081 11:47:20 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:23.081 11:47:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:23.081 11:47:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:23.081 11:47:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:23.081 11:47:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:23.081 11:47:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:23.081 11:47:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:23.081 11:47:20 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:23.081 11:47:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:23.081 11:47:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
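Each pass of the loop traced above (its tail for the last controller continues just below) performs the same probe: resolve the PCI BDF to its /dev/nvmeX node through the sysfs link, read OACS to see whether namespace management (bit 3) is supported, then read UNVMCAP to decide whether a namespace revert is needed. Condensed into one sketch, simplified from the xtrace with error handling omitted:

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # the sysfs path under the PCI device names the controller, e.g. .../nvme/nvme1
    sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrl=/dev/$(basename "$sysfs")
    # OACS 0x12a has bit 3 (0x8) set: namespace management is supported
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue   # all capacity allocated, nothing to revert
    fi
done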
00:04:23.081 11:47:20 -- common/autotest_common.sh@1541 -- # continue 00:04:23.081 11:47:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:23.081 11:47:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.081 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:23.081 11:47:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:23.081 11:47:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:23.081 11:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:23.081 11:47:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.909 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.909 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.909 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.167 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.167 11:47:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:24.167 11:47:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.167 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.167 11:47:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.167 11:47:21 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:24.167 11:47:21 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.167 11:47:21 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:24.167 11:47:21 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:24.167 11:47:21 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:24.167 11:47:21 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.167 11:47:21 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:24.167 11:47:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:24.167 11:47:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:24.167 11:47:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.167 11:47:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:24.167 11:47:21 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.167 11:47:21 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:24.167 11:47:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:24.167 11:47:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:24.167 11:47:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.167 11:47:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:24.167 11:47:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.167 11:47:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:24.167 11:47:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
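opal_revert_cleanup above walks the same BDF list again, this time comparing each device's PCI device ID against 0x0a54 (an Intel DC P4500/P4510-family ID) to find OPAL-capable drives; the emulated controllers all report 0x0010, so the list stays empty, and the check for the last controller continues just below. A sketch of that filter, using gen_nvme.sh and jq as in the trace:

get_nvme_bdfs_by_id() {
    local id=$1 bdf matches=()
    for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
                 | jq -r '.config[].params.traddr'); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && matches+=("$bdf")
    done
    (( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"
}
get_nvme_bdfs_by_id 0x0a54   # empty here: all four controllers report 0x0010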
00:04:24.167 11:47:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:24.167 11:47:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:24.167 11:47:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.167 11:47:21 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:24.167 11:47:21 -- common/autotest_common.sh@1570 -- # return 0 00:04:24.167 11:47:21 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:24.167 11:47:21 -- common/autotest_common.sh@1578 -- # return 0 00:04:24.167 11:47:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:24.167 11:47:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:24.167 11:47:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:24.167 11:47:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:24.167 11:47:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:24.167 11:47:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.167 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.167 11:47:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:24.167 11:47:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.167 11:47:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.167 11:47:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.167 11:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.167 ************************************ 00:04:24.167 START TEST env 00:04:24.167 ************************************ 00:04:24.167 11:47:21 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.167 * Looking for test storage... 00:04:24.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:24.167 11:47:21 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.167 11:47:21 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.167 11:47:21 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.426 11:47:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.426 11:47:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.426 11:47:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.426 11:47:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.426 11:47:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.426 11:47:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.426 11:47:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.426 11:47:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.426 11:47:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.426 11:47:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.426 11:47:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.426 11:47:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:24.426 11:47:21 env -- scripts/common.sh@345 -- # : 1 00:04:24.426 11:47:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.426 11:47:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.426 11:47:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:24.426 11:47:21 env -- scripts/common.sh@353 -- # local d=1 00:04:24.426 11:47:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.426 11:47:21 env -- scripts/common.sh@355 -- # echo 1 00:04:24.426 11:47:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.426 11:47:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:24.426 11:47:21 env -- scripts/common.sh@353 -- # local d=2 00:04:24.426 11:47:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.426 11:47:21 env -- scripts/common.sh@355 -- # echo 2 00:04:24.426 11:47:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.426 11:47:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.426 11:47:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.426 11:47:21 env -- scripts/common.sh@368 -- # return 0 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.426 --rc genhtml_branch_coverage=1 00:04:24.426 --rc genhtml_function_coverage=1 00:04:24.426 --rc genhtml_legend=1 00:04:24.426 --rc geninfo_all_blocks=1 00:04:24.426 --rc geninfo_unexecuted_blocks=1 00:04:24.426 00:04:24.426 ' 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.426 --rc genhtml_branch_coverage=1 00:04:24.426 --rc genhtml_function_coverage=1 00:04:24.426 --rc genhtml_legend=1 00:04:24.426 --rc geninfo_all_blocks=1 00:04:24.426 --rc geninfo_unexecuted_blocks=1 00:04:24.426 00:04:24.426 ' 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.426 --rc genhtml_branch_coverage=1 00:04:24.426 --rc genhtml_function_coverage=1 00:04:24.426 --rc genhtml_legend=1 00:04:24.426 --rc geninfo_all_blocks=1 00:04:24.426 --rc geninfo_unexecuted_blocks=1 00:04:24.426 00:04:24.426 ' 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.426 --rc genhtml_branch_coverage=1 00:04:24.426 --rc genhtml_function_coverage=1 00:04:24.426 --rc genhtml_legend=1 00:04:24.426 --rc geninfo_all_blocks=1 00:04:24.426 --rc geninfo_unexecuted_blocks=1 00:04:24.426 00:04:24.426 ' 00:04:24.426 11:47:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.426 11:47:21 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.426 11:47:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.426 ************************************ 00:04:24.426 START TEST env_memory 00:04:24.426 ************************************ 00:04:24.426 11:47:21 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.426 00:04:24.426 00:04:24.426 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.426 http://cunit.sourceforge.net/ 00:04:24.426 00:04:24.426 00:04:24.426 Suite: memory 00:04:24.426 Test: alloc and free memory map ...[2024-11-18 11:47:21.987491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:24.426 passed 00:04:24.426 Test: mem map translation ...[2024-11-18 11:47:22.026285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:24.426 [2024-11-18 11:47:22.026388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:24.426 [2024-11-18 11:47:22.026491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:24.426 [2024-11-18 11:47:22.026624] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:24.426 passed 00:04:24.426 Test: mem map registration ...[2024-11-18 11:47:22.094687] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:24.426 [2024-11-18 11:47:22.094787] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:24.426 passed 00:04:24.684 Test: mem map adjacent registrations ...passed 00:04:24.684 00:04:24.684 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.684 suites 1 1 n/a 0 0 00:04:24.684 tests 4 4 4 0 0 00:04:24.684 asserts 152 152 152 0 n/a 00:04:24.684 00:04:24.684 Elapsed time = 0.232 seconds 00:04:24.684 00:04:24.684 real 0m0.267s 00:04:24.684 user 0m0.241s 00:04:24.684 sys 0m0.018s 00:04:24.684 11:47:22 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.684 11:47:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:24.684 ************************************ 00:04:24.684 END TEST env_memory 00:04:24.684 ************************************ 00:04:24.684 11:47:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:24.684 11:47:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.684 11:47:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.684 11:47:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.684 ************************************ 00:04:24.684 START TEST env_vtophys 00:04:24.684 ************************************ 00:04:24.684 11:47:22 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:24.684 EAL: lib.eal log level changed from notice to debug 00:04:24.684 EAL: Detected lcore 0 as core 0 on socket 0 00:04:24.684 EAL: Detected lcore 1 as core 0 on socket 0 00:04:24.684 EAL: Detected lcore 2 as core 0 on socket 0 00:04:24.684 EAL: Detected lcore 3 as core 0 on socket 0 00:04:24.684 EAL: Detected lcore 4 as core 0 on socket 0 00:04:24.684 EAL: Detected lcore 5 as core 0 on socket 0 00:04:24.684 EAL: Detected lcore 6 as core 0 on socket 0 00:04:24.685 EAL: Detected lcore 7 as core 0 on socket 0 00:04:24.685 EAL: Detected lcore 8 as core 0 on socket 0 00:04:24.685 EAL: Detected lcore 9 as core 0 on socket 0 00:04:24.685 EAL: Maximum logical cores by configuration: 128 00:04:24.685 EAL: Detected CPU lcores: 10 00:04:24.685 EAL: Detected NUMA nodes: 1 00:04:24.685 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:24.685 EAL: Detected shared linkage of DPDK 00:04:24.685 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:24.685 EAL: Selected IOVA mode 'PA' 00:04:24.685 EAL: Probing VFIO support... 00:04:24.685 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:24.685 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:24.685 EAL: Ask a virtual area of 0x2e000 bytes 00:04:24.685 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:24.685 EAL: Setting up physically contiguous memory... 00:04:24.685 EAL: Setting maximum number of open files to 524288 00:04:24.685 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:24.685 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:24.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.685 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:24.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.685 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:24.685 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:24.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.685 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:24.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.685 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:24.685 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:24.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.685 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:24.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.685 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:24.685 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:24.685 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.685 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:24.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.685 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.685 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:24.685 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:24.685 EAL: Hugepages will be freed exactly as allocated. 00:04:24.685 EAL: No shared files mode enabled, IPC is disabled 00:04:24.685 EAL: No shared files mode enabled, IPC is disabled 00:04:24.943 EAL: TSC frequency is ~2600000 KHz 00:04:24.943 EAL: Main lcore 0 is ready (tid=7f943dcd7a40;cpuset=[0]) 00:04:24.943 EAL: Trying to obtain current memory policy. 00:04:24.943 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.943 EAL: Restoring previous memory policy: 0 00:04:24.943 EAL: request: mp_malloc_sync 00:04:24.943 EAL: No shared files mode enabled, IPC is disabled 00:04:24.943 EAL: Heap on socket 0 was expanded by 2MB 00:04:24.943 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:24.943 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:24.943 EAL: Mem event callback 'spdk:(nil)' registered 00:04:24.943 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:24.943 00:04:24.943 00:04:24.943 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.943 http://cunit.sourceforge.net/ 00:04:24.943 00:04:24.943 00:04:24.943 Suite: components_suite 00:04:25.201 Test: vtophys_malloc_test ...passed 00:04:25.201 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.201 EAL: Restoring previous memory policy: 4 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.201 EAL: Trying to obtain current memory policy. 00:04:25.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.201 EAL: Restoring previous memory policy: 4 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.201 EAL: Trying to obtain current memory policy. 00:04:25.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.201 EAL: Restoring previous memory policy: 4 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.201 EAL: Trying to obtain current memory policy. 00:04:25.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.201 EAL: Restoring previous memory policy: 4 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.201 EAL: Trying to obtain current memory policy. 00:04:25.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.201 EAL: Restoring previous memory policy: 4 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.201 EAL: Trying to obtain current memory policy. 
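The expand/shrink pairs in this run come from spdk_malloc()/spdk_free() calls of increasing size; the remaining iterations below follow the same pattern up to 1026MB. A minimal sketch of the user-facing side of this machinery, assuming the public spdk/env.h API (the 2 MiB size is illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    /* Sketch: allocate DMA-safe memory and translate it to a physical
     * address. Each allocation of a new size triggers the "Heap on
     * socket 0 was expanded/shrunk" mem-event callbacks in this log. */
    static int vtophys_demo(void)
    {
        void *buf = spdk_malloc(2 << 20 /* 2 MiB, illustrative */, 0x1000,
                                NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        if (buf == NULL) {
            return -1;
        }

        uint64_t paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            spdk_free(buf);
            return -1;
        }
        printf("va=%p pa=0x%" PRIx64 "\n", buf, paddr);

        spdk_free(buf); /* heap shrinks back, as in the log */
        return 0;
    }
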
00:04:25.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.201 EAL: Restoring previous memory policy: 4 00:04:25.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.201 EAL: request: mp_malloc_sync 00:04:25.201 EAL: No shared files mode enabled, IPC is disabled 00:04:25.201 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.459 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.459 EAL: request: mp_malloc_sync 00:04:25.459 EAL: No shared files mode enabled, IPC is disabled 00:04:25.459 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.459 EAL: Trying to obtain current memory policy. 00:04:25.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.459 EAL: Restoring previous memory policy: 4 00:04:25.459 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.459 EAL: request: mp_malloc_sync 00:04:25.459 EAL: No shared files mode enabled, IPC is disabled 00:04:25.459 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.718 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.718 EAL: request: mp_malloc_sync 00:04:25.718 EAL: No shared files mode enabled, IPC is disabled 00:04:25.718 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.718 EAL: Trying to obtain current memory policy. 00:04:25.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.718 EAL: Restoring previous memory policy: 4 00:04:25.718 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.718 EAL: request: mp_malloc_sync 00:04:25.718 EAL: No shared files mode enabled, IPC is disabled 00:04:25.718 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.976 EAL: request: mp_malloc_sync 00:04:25.976 EAL: No shared files mode enabled, IPC is disabled 00:04:25.976 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.235 EAL: Trying to obtain current memory policy. 00:04:26.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.494 EAL: Restoring previous memory policy: 4 00:04:26.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.494 EAL: request: mp_malloc_sync 00:04:26.494 EAL: No shared files mode enabled, IPC is disabled 00:04:26.494 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.061 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.061 EAL: request: mp_malloc_sync 00:04:27.061 EAL: No shared files mode enabled, IPC is disabled 00:04:27.061 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.626 EAL: Trying to obtain current memory policy. 
00:04:27.626 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.626 EAL: Restoring previous memory policy: 4 00:04:27.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.626 EAL: request: mp_malloc_sync 00:04:27.626 EAL: No shared files mode enabled, IPC is disabled 00:04:27.626 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.001 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.001 EAL: request: mp_malloc_sync 00:04:29.001 EAL: No shared files mode enabled, IPC is disabled 00:04:29.001 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.950 passed 00:04:29.950 00:04:29.950 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.950 suites 1 1 n/a 0 0 00:04:29.950 tests 2 2 2 0 0 00:04:29.950 asserts 5593 5593 5593 0 n/a 00:04:29.950 00:04:29.950 Elapsed time = 4.831 seconds 00:04:29.950 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.950 EAL: request: mp_malloc_sync 00:04:29.950 EAL: No shared files mode enabled, IPC is disabled 00:04:29.950 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.950 EAL: No shared files mode enabled, IPC is disabled 00:04:29.950 EAL: No shared files mode enabled, IPC is disabled 00:04:29.950 EAL: No shared files mode enabled, IPC is disabled 00:04:29.950 00:04:29.950 real 0m5.090s 00:04:29.950 user 0m4.299s 00:04:29.950 sys 0m0.647s 00:04:29.950 11:47:27 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.950 11:47:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.950 ************************************ 00:04:29.950 END TEST env_vtophys 00:04:29.950 ************************************ 00:04:29.950 11:47:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.950 11:47:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.950 11:47:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.950 11:47:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.950 ************************************ 00:04:29.950 START TEST env_pci 00:04:29.950 ************************************ 00:04:29.950 11:47:27 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.950 00:04:29.950 00:04:29.950 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.950 http://cunit.sourceforge.net/ 00:04:29.950 00:04:29.950 00:04:29.950 Suite: pci 00:04:29.950 Test: pci_hook ...[2024-11-18 11:47:27.397125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57006 has claimed it 00:04:29.950 passed 00:04:29.950 00:04:29.950 EAL: Cannot find device (10000:00:01.0) 00:04:29.950 EAL: Failed to attach device on primary process 00:04:29.950 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.950 suites 1 1 n/a 0 0 00:04:29.950 tests 1 1 1 0 0 00:04:29.950 asserts 25 25 25 0 n/a 00:04:29.950 00:04:29.950 Elapsed time = 0.006 seconds 00:04:29.950 00:04:29.950 real 0m0.068s 00:04:29.950 user 0m0.030s 00:04:29.950 sys 0m0.037s 00:04:29.950 11:47:27 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.950 11:47:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.950 ************************************ 00:04:29.950 END TEST env_pci 00:04:29.950 ************************************ 00:04:29.950 11:47:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.950 11:47:27 env -- env/env.sh@15 -- # uname 00:04:29.950 11:47:27 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.950 11:47:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.950 11:47:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.950 11:47:27 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:29.950 11:47:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.950 11:47:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.950 ************************************ 00:04:29.950 START TEST env_dpdk_post_init 00:04:29.950 ************************************ 00:04:29.950 11:47:27 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.950 EAL: Detected CPU lcores: 10 00:04:29.950 EAL: Detected NUMA nodes: 1 00:04:29.950 EAL: Detected shared linkage of DPDK 00:04:29.950 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.950 EAL: Selected IOVA mode 'PA' 00:04:30.211 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:30.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:30.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:30.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:30.211 Starting DPDK initialization... 00:04:30.211 Starting SPDK post initialization... 00:04:30.211 SPDK NVMe probe 00:04:30.211 Attaching to 0000:00:10.0 00:04:30.211 Attaching to 0000:00:11.0 00:04:30.211 Attaching to 0000:00:12.0 00:04:30.211 Attaching to 0000:00:13.0 00:04:30.211 Attached to 0000:00:10.0 00:04:30.211 Attached to 0000:00:11.0 00:04:30.211 Attached to 0000:00:13.0 00:04:30.211 Attached to 0000:00:12.0 00:04:30.211 Cleaning up... 
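The Attaching/Attached pairs above are the local PCIe enumeration performed by env_dpdk_post_init. A minimal sketch of the probe pattern it relies on, assuming the public spdk/nvme.h API (controller cleanup omitted):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Accept every controller the PCIe transport offers. */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    /* spdk_nvme_probe(NULL, ...) scans the local PCIe bus, producing the
     * Attaching/Attached pairs seen in the log above. */
    static int enumerate(void)
    {
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }
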
00:04:30.211 ************************************ 00:04:30.211 END TEST env_dpdk_post_init 00:04:30.211 ************************************ 00:04:30.211 00:04:30.211 real 0m0.230s 00:04:30.211 user 0m0.065s 00:04:30.211 sys 0m0.067s 00:04:30.211 11:47:27 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.211 11:47:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.211 11:47:27 env -- env/env.sh@26 -- # uname 00:04:30.211 11:47:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:30.211 11:47:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.211 11:47:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.211 11:47:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.211 11:47:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.211 ************************************ 00:04:30.211 START TEST env_mem_callbacks 00:04:30.211 ************************************ 00:04:30.211 11:47:27 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.211 EAL: Detected CPU lcores: 10 00:04:30.211 EAL: Detected NUMA nodes: 1 00:04:30.211 EAL: Detected shared linkage of DPDK 00:04:30.211 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.211 EAL: Selected IOVA mode 'PA' 00:04:30.470 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.470 00:04:30.470 00:04:30.470 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.470 http://cunit.sourceforge.net/ 00:04:30.470 00:04:30.470 00:04:30.470 Suite: memory 00:04:30.470 Test: test ... 00:04:30.470 register 0x200000200000 2097152 00:04:30.470 malloc 3145728 00:04:30.470 register 0x200000400000 4194304 00:04:30.470 buf 0x2000004fffc0 len 3145728 PASSED 00:04:30.470 malloc 64 00:04:30.470 buf 0x2000004ffec0 len 64 PASSED 00:04:30.470 malloc 4194304 00:04:30.470 register 0x200000800000 6291456 00:04:30.470 buf 0x2000009fffc0 len 4194304 PASSED 00:04:30.470 free 0x2000004fffc0 3145728 00:04:30.470 free 0x2000004ffec0 64 00:04:30.470 unregister 0x200000400000 4194304 PASSED 00:04:30.470 free 0x2000009fffc0 4194304 00:04:30.470 unregister 0x200000800000 6291456 PASSED 00:04:30.470 malloc 8388608 00:04:30.470 register 0x200000400000 10485760 00:04:30.470 buf 0x2000005fffc0 len 8388608 PASSED 00:04:30.470 free 0x2000005fffc0 8388608 00:04:30.470 unregister 0x200000400000 10485760 PASSED 00:04:30.470 passed 00:04:30.470 00:04:30.470 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.470 suites 1 1 n/a 0 0 00:04:30.470 tests 1 1 1 0 0 00:04:30.470 asserts 15 15 15 0 n/a 00:04:30.470 00:04:30.470 Elapsed time = 0.039 seconds 00:04:30.470 00:04:30.470 real 0m0.205s 00:04:30.470 user 0m0.051s 00:04:30.470 sys 0m0.053s 00:04:30.470 11:47:27 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.470 11:47:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:30.470 ************************************ 00:04:30.470 END TEST env_mem_callbacks 00:04:30.470 ************************************ 00:04:30.470 ************************************ 00:04:30.470 END TEST env 00:04:30.470 ************************************ 00:04:30.470 00:04:30.470 real 0m6.212s 00:04:30.470 user 0m4.846s 00:04:30.470 sys 0m1.014s 00:04:30.470 11:47:28 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.470 11:47:28 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.470 11:47:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.470 11:47:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.470 11:47:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.470 11:47:28 -- common/autotest_common.sh@10 -- # set +x 00:04:30.470 ************************************ 00:04:30.470 START TEST rpc 00:04:30.470 ************************************ 00:04:30.470 11:47:28 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.470 * Looking for test storage... 00:04:30.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.470 11:47:28 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:30.470 11:47:28 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:30.470 11:47:28 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:30.470 11:47:28 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:30.470 11:47:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.470 11:47:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.470 11:47:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.470 11:47:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.470 11:47:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.470 11:47:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.470 11:47:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.470 11:47:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.470 11:47:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.470 11:47:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.470 11:47:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.470 11:47:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.470 11:47:28 rpc -- scripts/common.sh@345 -- # : 1 00:04:30.470 11:47:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.470 11:47:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.470 11:47:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.470 11:47:28 rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.470 11:47:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.470 11:47:28 rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.470 11:47:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.729 11:47:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.729 11:47:28 rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.729 11:47:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.729 11:47:28 rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.729 11:47:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.729 11:47:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.729 11:47:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.729 11:47:28 rpc -- scripts/common.sh@368 -- # return 0 00:04:30.729 11:47:28 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.729 11:47:28 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:30.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.729 --rc genhtml_branch_coverage=1 00:04:30.729 --rc genhtml_function_coverage=1 00:04:30.729 --rc genhtml_legend=1 00:04:30.729 --rc geninfo_all_blocks=1 00:04:30.729 --rc geninfo_unexecuted_blocks=1 00:04:30.729 00:04:30.729 ' 00:04:30.729 11:47:28 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:30.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.729 --rc genhtml_branch_coverage=1 00:04:30.729 --rc genhtml_function_coverage=1 00:04:30.729 --rc genhtml_legend=1 00:04:30.729 --rc geninfo_all_blocks=1 00:04:30.729 --rc geninfo_unexecuted_blocks=1 00:04:30.729 00:04:30.729 ' 00:04:30.729 11:47:28 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:30.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.730 --rc genhtml_branch_coverage=1 00:04:30.730 --rc genhtml_function_coverage=1 00:04:30.730 --rc genhtml_legend=1 00:04:30.730 --rc geninfo_all_blocks=1 00:04:30.730 --rc geninfo_unexecuted_blocks=1 00:04:30.730 00:04:30.730 ' 00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:30.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.730 --rc genhtml_branch_coverage=1 00:04:30.730 --rc genhtml_function_coverage=1 00:04:30.730 --rc genhtml_legend=1 00:04:30.730 --rc geninfo_all_blocks=1 00:04:30.730 --rc geninfo_unexecuted_blocks=1 00:04:30.730 00:04:30.730 ' 00:04:30.730 11:47:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57133 00:04:30.730 11:47:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.730 11:47:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57133 00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@833 -- # '[' -z 57133 ']' 00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
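waitforlisten returns once spdk_tgt's JSON-RPC socket accepts connections. A rough C equivalent of that poll loop, assuming the socket path from this run (retry count and delay are illustrative):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Poll until the target's RPC socket accepts a connection,
     * roughly what waitforlisten does in shell. */
    static int wait_for_rpc(const char *path, int max_tries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd >= 0 &&
                connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0; /* target is listening */
            }
            if (fd >= 0) {
                close(fd);
            }
            usleep(100 * 1000); /* 100 ms between attempts */
        }
        return -1;
    }

For this run the call would be wait_for_rpc("/var/tmp/spdk.sock", 100).
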
00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.730 11:47:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.730 11:47:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:30.730 [2024-11-18 11:47:28.256511] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:30.730 [2024-11-18 11:47:28.256643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57133 ] 00:04:30.730 [2024-11-18 11:47:28.406466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.988 [2024-11-18 11:47:28.483175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.988 [2024-11-18 11:47:28.483221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57133' to capture a snapshot of events at runtime. 00:04:30.988 [2024-11-18 11:47:28.483229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.988 [2024-11-18 11:47:28.483236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.988 [2024-11-18 11:47:28.483242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57133 for offline analysis/debug. 00:04:30.988 [2024-11-18 11:47:28.483906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.554 11:47:29 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:31.554 11:47:29 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:31.554 11:47:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.554 11:47:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.554 11:47:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:31.554 11:47:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:31.554 11:47:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.554 11:47:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.554 11:47:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 ************************************ 00:04:31.554 START TEST rpc_integrity 00:04:31.554 ************************************ 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
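rpc_cmd drives the target over its Unix-domain JSON-RPC socket; bdev_malloc_create 8 512 requests an 8 MiB bdev with 512-byte blocks, i.e. the 16384 blocks visible in the bdev dump that follows. A minimal C sketch of the same request, assuming SPDK's plain JSON-over-socket framing (single read for the reply, no real parsing):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Send one JSON-RPC request to a running spdk_tgt and dump the reply.
     * Equivalent to: rpc_cmd bdev_malloc_create 8 512
     * (8 MiB / 512 B = 16384 blocks, matching the bdev dump below). */
    int main(void)
    {
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
            "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        if (write(fd, req, strlen(req)) < 0) {
            perror("write");
        }

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf); /* reply carries the new bdev name, e.g. "Malloc0" */
        }
        close(fd);
        return 0;
    }
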
00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.554 { 00:04:31.554 "name": "Malloc0", 00:04:31.554 "aliases": [ 00:04:31.554 "0291ecfb-f8c1-419b-8319-fcd36e81508d" 00:04:31.554 ], 00:04:31.554 "product_name": "Malloc disk", 00:04:31.554 "block_size": 512, 00:04:31.554 "num_blocks": 16384, 00:04:31.554 "uuid": "0291ecfb-f8c1-419b-8319-fcd36e81508d", 00:04:31.554 "assigned_rate_limits": { 00:04:31.554 "rw_ios_per_sec": 0, 00:04:31.554 "rw_mbytes_per_sec": 0, 00:04:31.554 "r_mbytes_per_sec": 0, 00:04:31.554 "w_mbytes_per_sec": 0 00:04:31.554 }, 00:04:31.554 "claimed": false, 00:04:31.554 "zoned": false, 00:04:31.554 "supported_io_types": { 00:04:31.554 "read": true, 00:04:31.554 "write": true, 00:04:31.554 "unmap": true, 00:04:31.554 "flush": true, 00:04:31.554 "reset": true, 00:04:31.554 "nvme_admin": false, 00:04:31.554 "nvme_io": false, 00:04:31.554 "nvme_io_md": false, 00:04:31.554 "write_zeroes": true, 00:04:31.554 "zcopy": true, 00:04:31.554 "get_zone_info": false, 00:04:31.554 "zone_management": false, 00:04:31.554 "zone_append": false, 00:04:31.554 "compare": false, 00:04:31.554 "compare_and_write": false, 00:04:31.554 "abort": true, 00:04:31.554 "seek_hole": false, 00:04:31.554 "seek_data": false, 00:04:31.554 "copy": true, 00:04:31.554 "nvme_iov_md": false 00:04:31.554 }, 00:04:31.554 "memory_domains": [ 00:04:31.554 { 00:04:31.554 "dma_device_id": "system", 00:04:31.554 "dma_device_type": 1 00:04:31.554 }, 00:04:31.554 { 00:04:31.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.554 "dma_device_type": 2 00:04:31.554 } 00:04:31.554 ], 00:04:31.554 "driver_specific": {} 00:04:31.554 } 00:04:31.554 ]' 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 [2024-11-18 11:47:29.153713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:31.554 [2024-11-18 11:47:29.153763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.554 [2024-11-18 11:47:29.153784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:31.554 [2024-11-18 11:47:29.153794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.554 [2024-11-18 11:47:29.155612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.554 [2024-11-18 11:47:29.155648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.554 
Passthru0 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.554 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.555 { 00:04:31.555 "name": "Malloc0", 00:04:31.555 "aliases": [ 00:04:31.555 "0291ecfb-f8c1-419b-8319-fcd36e81508d" 00:04:31.555 ], 00:04:31.555 "product_name": "Malloc disk", 00:04:31.555 "block_size": 512, 00:04:31.555 "num_blocks": 16384, 00:04:31.555 "uuid": "0291ecfb-f8c1-419b-8319-fcd36e81508d", 00:04:31.555 "assigned_rate_limits": { 00:04:31.555 "rw_ios_per_sec": 0, 00:04:31.555 "rw_mbytes_per_sec": 0, 00:04:31.555 "r_mbytes_per_sec": 0, 00:04:31.555 "w_mbytes_per_sec": 0 00:04:31.555 }, 00:04:31.555 "claimed": true, 00:04:31.555 "claim_type": "exclusive_write", 00:04:31.555 "zoned": false, 00:04:31.555 "supported_io_types": { 00:04:31.555 "read": true, 00:04:31.555 "write": true, 00:04:31.555 "unmap": true, 00:04:31.555 "flush": true, 00:04:31.555 "reset": true, 00:04:31.555 "nvme_admin": false, 00:04:31.555 "nvme_io": false, 00:04:31.555 "nvme_io_md": false, 00:04:31.555 "write_zeroes": true, 00:04:31.555 "zcopy": true, 00:04:31.555 "get_zone_info": false, 00:04:31.555 "zone_management": false, 00:04:31.555 "zone_append": false, 00:04:31.555 "compare": false, 00:04:31.555 "compare_and_write": false, 00:04:31.555 "abort": true, 00:04:31.555 "seek_hole": false, 00:04:31.555 "seek_data": false, 00:04:31.555 "copy": true, 00:04:31.555 "nvme_iov_md": false 00:04:31.555 }, 00:04:31.555 "memory_domains": [ 00:04:31.555 { 00:04:31.555 "dma_device_id": "system", 00:04:31.555 "dma_device_type": 1 00:04:31.555 }, 00:04:31.555 { 00:04:31.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.555 "dma_device_type": 2 00:04:31.555 } 00:04:31.555 ], 00:04:31.555 "driver_specific": {} 00:04:31.555 }, 00:04:31.555 { 00:04:31.555 "name": "Passthru0", 00:04:31.555 "aliases": [ 00:04:31.555 "904f1438-7594-5e3a-8ffe-5b31c327583b" 00:04:31.555 ], 00:04:31.555 "product_name": "passthru", 00:04:31.555 "block_size": 512, 00:04:31.555 "num_blocks": 16384, 00:04:31.555 "uuid": "904f1438-7594-5e3a-8ffe-5b31c327583b", 00:04:31.555 "assigned_rate_limits": { 00:04:31.555 "rw_ios_per_sec": 0, 00:04:31.555 "rw_mbytes_per_sec": 0, 00:04:31.555 "r_mbytes_per_sec": 0, 00:04:31.555 "w_mbytes_per_sec": 0 00:04:31.555 }, 00:04:31.555 "claimed": false, 00:04:31.555 "zoned": false, 00:04:31.555 "supported_io_types": { 00:04:31.555 "read": true, 00:04:31.555 "write": true, 00:04:31.555 "unmap": true, 00:04:31.555 "flush": true, 00:04:31.555 "reset": true, 00:04:31.555 "nvme_admin": false, 00:04:31.555 "nvme_io": false, 00:04:31.555 "nvme_io_md": false, 00:04:31.555 "write_zeroes": true, 00:04:31.555 "zcopy": true, 00:04:31.555 "get_zone_info": false, 00:04:31.555 "zone_management": false, 00:04:31.555 "zone_append": false, 00:04:31.555 "compare": false, 00:04:31.555 "compare_and_write": false, 00:04:31.555 "abort": true, 00:04:31.555 "seek_hole": false, 00:04:31.555 "seek_data": false, 00:04:31.555 "copy": true, 00:04:31.555 "nvme_iov_md": false 00:04:31.555 }, 00:04:31.555 "memory_domains": [ 00:04:31.555 { 00:04:31.555 "dma_device_id": "system", 00:04:31.555 "dma_device_type": 1 00:04:31.555 }, 
00:04:31.555 { 00:04:31.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.555 "dma_device_type": 2 00:04:31.555 } 00:04:31.555 ], 00:04:31.555 "driver_specific": { 00:04:31.555 "passthru": { 00:04:31.555 "name": "Passthru0", 00:04:31.555 "base_bdev_name": "Malloc0" 00:04:31.555 } 00:04:31.555 } 00:04:31.555 } 00:04:31.555 ]' 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.555 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.555 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.813 11:47:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.813 00:04:31.813 real 0m0.238s 00:04:31.813 user 0m0.126s 00:04:31.813 sys 0m0.037s 00:04:31.813 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.813 11:47:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.813 ************************************ 00:04:31.813 END TEST rpc_integrity 00:04:31.813 ************************************ 00:04:31.813 11:47:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:31.813 11:47:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.813 11:47:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.813 11:47:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.813 ************************************ 00:04:31.813 START TEST rpc_plugins 00:04:31.813 ************************************ 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.813 11:47:29 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:31.813 { 00:04:31.813 "name": "Malloc1", 00:04:31.813 "aliases": [ 00:04:31.813 "4e1d2779-1fbd-401b-b22c-ac41ffc84e28" 00:04:31.813 ], 00:04:31.813 "product_name": "Malloc disk", 00:04:31.813 "block_size": 4096, 00:04:31.813 "num_blocks": 256, 00:04:31.813 "uuid": "4e1d2779-1fbd-401b-b22c-ac41ffc84e28", 00:04:31.813 "assigned_rate_limits": { 00:04:31.813 "rw_ios_per_sec": 0, 00:04:31.813 "rw_mbytes_per_sec": 0, 00:04:31.813 "r_mbytes_per_sec": 0, 00:04:31.813 "w_mbytes_per_sec": 0 00:04:31.813 }, 00:04:31.813 "claimed": false, 00:04:31.813 "zoned": false, 00:04:31.813 "supported_io_types": { 00:04:31.813 "read": true, 00:04:31.813 "write": true, 00:04:31.813 "unmap": true, 00:04:31.813 "flush": true, 00:04:31.813 "reset": true, 00:04:31.813 "nvme_admin": false, 00:04:31.813 "nvme_io": false, 00:04:31.813 "nvme_io_md": false, 00:04:31.813 "write_zeroes": true, 00:04:31.813 "zcopy": true, 00:04:31.813 "get_zone_info": false, 00:04:31.813 "zone_management": false, 00:04:31.813 "zone_append": false, 00:04:31.813 "compare": false, 00:04:31.813 "compare_and_write": false, 00:04:31.813 "abort": true, 00:04:31.813 "seek_hole": false, 00:04:31.813 "seek_data": false, 00:04:31.813 "copy": true, 00:04:31.813 "nvme_iov_md": false 00:04:31.813 }, 00:04:31.813 "memory_domains": [ 00:04:31.813 { 00:04:31.813 "dma_device_id": "system", 00:04:31.813 "dma_device_type": 1 00:04:31.813 }, 00:04:31.813 { 00:04:31.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.813 "dma_device_type": 2 00:04:31.813 } 00:04:31.813 ], 00:04:31.813 "driver_specific": {} 00:04:31.813 } 00:04:31.813 ]' 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.813 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:31.813 11:47:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:31.813 00:04:31.814 real 0m0.111s 00:04:31.814 user 0m0.056s 00:04:31.814 sys 0m0.018s 00:04:31.814 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.814 ************************************ 00:04:31.814 END TEST rpc_plugins 00:04:31.814 ************************************ 00:04:31.814 11:47:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.814 11:47:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:31.814 11:47:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.814 11:47:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.814 11:47:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.814 ************************************ 00:04:31.814 START TEST rpc_trace_cmd_test 
00:04:31.814 ************************************ 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:31.814 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57133", 00:04:31.814 "tpoint_group_mask": "0x8", 00:04:31.814 "iscsi_conn": { 00:04:31.814 "mask": "0x2", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "scsi": { 00:04:31.814 "mask": "0x4", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "bdev": { 00:04:31.814 "mask": "0x8", 00:04:31.814 "tpoint_mask": "0xffffffffffffffff" 00:04:31.814 }, 00:04:31.814 "nvmf_rdma": { 00:04:31.814 "mask": "0x10", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "nvmf_tcp": { 00:04:31.814 "mask": "0x20", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "ftl": { 00:04:31.814 "mask": "0x40", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "blobfs": { 00:04:31.814 "mask": "0x80", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "dsa": { 00:04:31.814 "mask": "0x200", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "thread": { 00:04:31.814 "mask": "0x400", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "nvme_pcie": { 00:04:31.814 "mask": "0x800", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "iaa": { 00:04:31.814 "mask": "0x1000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "nvme_tcp": { 00:04:31.814 "mask": "0x2000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "bdev_nvme": { 00:04:31.814 "mask": "0x4000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "sock": { 00:04:31.814 "mask": "0x8000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "blob": { 00:04:31.814 "mask": "0x10000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "bdev_raid": { 00:04:31.814 "mask": "0x20000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 }, 00:04:31.814 "scheduler": { 00:04:31.814 "mask": "0x40000", 00:04:31.814 "tpoint_mask": "0x0" 00:04:31.814 } 00:04:31.814 }' 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:31.814 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.073 00:04:32.073 real 0m0.167s 00:04:32.073 
user 0m0.135s 00:04:32.073 sys 0m0.023s 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.073 11:47:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.073 ************************************ 00:04:32.073 END TEST rpc_trace_cmd_test 00:04:32.073 ************************************ 00:04:32.073 11:47:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.073 11:47:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.073 11:47:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.073 11:47:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.073 11:47:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.073 11:47:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.073 ************************************ 00:04:32.073 START TEST rpc_daemon_integrity 00:04:32.073 ************************************ 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.073 { 00:04:32.073 "name": "Malloc2", 00:04:32.073 "aliases": [ 00:04:32.073 "c888be5b-0624-4897-b783-7c6766931fc3" 00:04:32.073 ], 00:04:32.073 "product_name": "Malloc disk", 00:04:32.073 "block_size": 512, 00:04:32.073 "num_blocks": 16384, 00:04:32.073 "uuid": "c888be5b-0624-4897-b783-7c6766931fc3", 00:04:32.073 "assigned_rate_limits": { 00:04:32.073 "rw_ios_per_sec": 0, 00:04:32.073 "rw_mbytes_per_sec": 0, 00:04:32.073 "r_mbytes_per_sec": 0, 00:04:32.073 "w_mbytes_per_sec": 0 00:04:32.073 }, 00:04:32.073 "claimed": false, 00:04:32.073 "zoned": false, 00:04:32.073 "supported_io_types": { 00:04:32.073 "read": true, 00:04:32.073 "write": true, 00:04:32.073 "unmap": true, 00:04:32.073 "flush": true, 00:04:32.073 "reset": true, 00:04:32.073 "nvme_admin": false, 00:04:32.073 "nvme_io": false, 00:04:32.073 "nvme_io_md": false, 00:04:32.073 "write_zeroes": true, 00:04:32.073 "zcopy": true, 00:04:32.073 "get_zone_info": 
false, 00:04:32.073 "zone_management": false, 00:04:32.073 "zone_append": false, 00:04:32.073 "compare": false, 00:04:32.073 "compare_and_write": false, 00:04:32.073 "abort": true, 00:04:32.073 "seek_hole": false, 00:04:32.073 "seek_data": false, 00:04:32.073 "copy": true, 00:04:32.073 "nvme_iov_md": false 00:04:32.073 }, 00:04:32.073 "memory_domains": [ 00:04:32.073 { 00:04:32.073 "dma_device_id": "system", 00:04:32.073 "dma_device_type": 1 00:04:32.073 }, 00:04:32.073 { 00:04:32.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.073 "dma_device_type": 2 00:04:32.073 } 00:04:32.073 ], 00:04:32.073 "driver_specific": {} 00:04:32.073 } 00:04:32.073 ]' 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.073 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.332 [2024-11-18 11:47:29.774155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.332 [2024-11-18 11:47:29.774199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.332 [2024-11-18 11:47:29.774216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:32.332 [2024-11-18 11:47:29.774225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.332 [2024-11-18 11:47:29.776003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.332 [2024-11-18 11:47:29.776035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.332 Passthru0 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.332 { 00:04:32.332 "name": "Malloc2", 00:04:32.332 "aliases": [ 00:04:32.332 "c888be5b-0624-4897-b783-7c6766931fc3" 00:04:32.332 ], 00:04:32.332 "product_name": "Malloc disk", 00:04:32.332 "block_size": 512, 00:04:32.332 "num_blocks": 16384, 00:04:32.332 "uuid": "c888be5b-0624-4897-b783-7c6766931fc3", 00:04:32.332 "assigned_rate_limits": { 00:04:32.332 "rw_ios_per_sec": 0, 00:04:32.332 "rw_mbytes_per_sec": 0, 00:04:32.332 "r_mbytes_per_sec": 0, 00:04:32.332 "w_mbytes_per_sec": 0 00:04:32.332 }, 00:04:32.332 "claimed": true, 00:04:32.332 "claim_type": "exclusive_write", 00:04:32.332 "zoned": false, 00:04:32.332 "supported_io_types": { 00:04:32.332 "read": true, 00:04:32.332 "write": true, 00:04:32.332 "unmap": true, 00:04:32.332 "flush": true, 00:04:32.332 "reset": true, 00:04:32.332 "nvme_admin": false, 00:04:32.332 "nvme_io": false, 00:04:32.332 "nvme_io_md": false, 00:04:32.332 "write_zeroes": true, 00:04:32.332 "zcopy": true, 00:04:32.332 "get_zone_info": false, 00:04:32.332 "zone_management": false, 00:04:32.332 "zone_append": false, 00:04:32.332 "compare": false, 
00:04:32.332 "compare_and_write": false, 00:04:32.332 "abort": true, 00:04:32.332 "seek_hole": false, 00:04:32.332 "seek_data": false, 00:04:32.332 "copy": true, 00:04:32.332 "nvme_iov_md": false 00:04:32.332 }, 00:04:32.332 "memory_domains": [ 00:04:32.332 { 00:04:32.332 "dma_device_id": "system", 00:04:32.332 "dma_device_type": 1 00:04:32.332 }, 00:04:32.332 { 00:04:32.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.332 "dma_device_type": 2 00:04:32.332 } 00:04:32.332 ], 00:04:32.332 "driver_specific": {} 00:04:32.332 }, 00:04:32.332 { 00:04:32.332 "name": "Passthru0", 00:04:32.332 "aliases": [ 00:04:32.332 "75cccf71-d433-570c-8d06-76fe274928d3" 00:04:32.332 ], 00:04:32.332 "product_name": "passthru", 00:04:32.332 "block_size": 512, 00:04:32.332 "num_blocks": 16384, 00:04:32.332 "uuid": "75cccf71-d433-570c-8d06-76fe274928d3", 00:04:32.332 "assigned_rate_limits": { 00:04:32.332 "rw_ios_per_sec": 0, 00:04:32.332 "rw_mbytes_per_sec": 0, 00:04:32.332 "r_mbytes_per_sec": 0, 00:04:32.332 "w_mbytes_per_sec": 0 00:04:32.332 }, 00:04:32.332 "claimed": false, 00:04:32.332 "zoned": false, 00:04:32.332 "supported_io_types": { 00:04:32.332 "read": true, 00:04:32.332 "write": true, 00:04:32.332 "unmap": true, 00:04:32.332 "flush": true, 00:04:32.332 "reset": true, 00:04:32.332 "nvme_admin": false, 00:04:32.332 "nvme_io": false, 00:04:32.332 "nvme_io_md": false, 00:04:32.332 "write_zeroes": true, 00:04:32.332 "zcopy": true, 00:04:32.332 "get_zone_info": false, 00:04:32.332 "zone_management": false, 00:04:32.332 "zone_append": false, 00:04:32.332 "compare": false, 00:04:32.332 "compare_and_write": false, 00:04:32.332 "abort": true, 00:04:32.332 "seek_hole": false, 00:04:32.332 "seek_data": false, 00:04:32.332 "copy": true, 00:04:32.332 "nvme_iov_md": false 00:04:32.332 }, 00:04:32.332 "memory_domains": [ 00:04:32.332 { 00:04:32.332 "dma_device_id": "system", 00:04:32.332 "dma_device_type": 1 00:04:32.332 }, 00:04:32.332 { 00:04:32.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.332 "dma_device_type": 2 00:04:32.332 } 00:04:32.332 ], 00:04:32.332 "driver_specific": { 00:04:32.332 "passthru": { 00:04:32.332 "name": "Passthru0", 00:04:32.332 "base_bdev_name": "Malloc2" 00:04:32.332 } 00:04:32.332 } 00:04:32.332 } 00:04:32.332 ]' 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.332 00:04:32.332 real 0m0.228s 00:04:32.332 user 0m0.123s 00:04:32.332 sys 0m0.028s 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.332 11:47:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.332 ************************************ 00:04:32.332 END TEST rpc_daemon_integrity 00:04:32.332 ************************************ 00:04:32.332 11:47:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.332 11:47:29 rpc -- rpc/rpc.sh@84 -- # killprocess 57133 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@952 -- # '[' -z 57133 ']' 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@956 -- # kill -0 57133 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@957 -- # uname 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57133 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:32.332 killing process with pid 57133 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57133' 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@971 -- # kill 57133 00:04:32.332 11:47:29 rpc -- common/autotest_common.sh@976 -- # wait 57133 00:04:33.705 00:04:33.705 real 0m3.099s 00:04:33.705 user 0m3.474s 00:04:33.705 sys 0m0.574s 00:04:33.705 11:47:31 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.705 11:47:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.705 ************************************ 00:04:33.705 END TEST rpc 00:04:33.705 ************************************ 00:04:33.705 11:47:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:33.705 11:47:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.705 11:47:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.705 11:47:31 -- common/autotest_common.sh@10 -- # set +x 00:04:33.705 ************************************ 00:04:33.705 START TEST skip_rpc 00:04:33.705 ************************************ 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:33.705 * Looking for test storage... 
00:04:33.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.705 11:47:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.705 11:47:31 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.705 --rc genhtml_branch_coverage=1 00:04:33.705 --rc genhtml_function_coverage=1 00:04:33.705 --rc genhtml_legend=1 00:04:33.705 --rc geninfo_all_blocks=1 00:04:33.706 --rc geninfo_unexecuted_blocks=1 00:04:33.706 00:04:33.706 ' 00:04:33.706 11:47:31 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.706 --rc genhtml_branch_coverage=1 00:04:33.706 --rc genhtml_function_coverage=1 00:04:33.706 --rc genhtml_legend=1 00:04:33.706 --rc geninfo_all_blocks=1 00:04:33.706 --rc geninfo_unexecuted_blocks=1 00:04:33.706 00:04:33.706 ' 00:04:33.706 11:47:31 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.706 --rc genhtml_branch_coverage=1 00:04:33.706 --rc genhtml_function_coverage=1 00:04:33.706 --rc genhtml_legend=1 00:04:33.706 --rc geninfo_all_blocks=1 00:04:33.706 --rc geninfo_unexecuted_blocks=1 00:04:33.706 00:04:33.706 ' 00:04:33.706 11:47:31 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:33.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.706 --rc genhtml_branch_coverage=1 00:04:33.706 --rc genhtml_function_coverage=1 00:04:33.706 --rc genhtml_legend=1 00:04:33.706 --rc geninfo_all_blocks=1 00:04:33.706 --rc geninfo_unexecuted_blocks=1 00:04:33.706 00:04:33.706 ' 00:04:33.706 11:47:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.706 11:47:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.706 11:47:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:33.706 11:47:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.706 11:47:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.706 11:47:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.706 ************************************ 00:04:33.706 START TEST skip_rpc 00:04:33.706 ************************************ 00:04:33.706 11:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:33.706 11:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57340 00:04:33.706 11:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.706 11:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:33.706 11:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:33.706 [2024-11-18 11:47:31.396501] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:33.706 [2024-11-18 11:47:31.396637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57340 ] 00:04:33.964 [2024-11-18 11:47:31.542998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.964 [2024-11-18 11:47:31.624051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.226 11:47:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:39.226 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:39.226 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:39.226 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:39.226 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.226 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57340 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57340 ']' 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57340 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57340 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.227 killing process with pid 57340 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57340' 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57340 00:04:39.227 11:47:36 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57340 00:04:40.601 00:04:40.601 real 0m6.549s 00:04:40.601 user 0m6.177s 00:04:40.601 sys 0m0.261s 00:04:40.601 11:47:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.601 11:47:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.601 ************************************ 00:04:40.601 END TEST skip_rpc 00:04:40.601 
************************************ 00:04:40.601 11:47:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:40.601 11:47:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.601 11:47:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.601 11:47:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.601 ************************************ 00:04:40.601 START TEST skip_rpc_with_json 00:04:40.601 ************************************ 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57433 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57433 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57433 ']' 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.601 11:47:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.601 [2024-11-18 11:47:37.981241] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:40.601 [2024-11-18 11:47:37.981364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57433 ] 00:04:40.601 [2024-11-18 11:47:38.138204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.601 [2024-11-18 11:47:38.214765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.168 [2024-11-18 11:47:38.817959] nvmf_rpc.c:2868:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:41.168 request: 00:04:41.168 { 00:04:41.168 "trtype": "tcp", 00:04:41.168 "method": "nvmf_get_transports", 00:04:41.168 "req_id": 1 00:04:41.168 } 00:04:41.168 Got JSON-RPC error response 00:04:41.168 response: 00:04:41.168 { 00:04:41.168 "code": -19, 00:04:41.168 "message": "No such device" 00:04:41.168 } 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.168 [2024-11-18 11:47:38.826049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.168 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.426 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.426 11:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.426 { 00:04:41.426 "subsystems": [ 00:04:41.426 { 00:04:41.426 "subsystem": "fsdev", 00:04:41.426 "config": [ 00:04:41.426 { 00:04:41.426 "method": "fsdev_set_opts", 00:04:41.426 "params": { 00:04:41.426 "fsdev_io_pool_size": 65535, 00:04:41.426 "fsdev_io_cache_size": 256 00:04:41.426 } 00:04:41.426 } 00:04:41.426 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "keyring", 00:04:41.427 "config": [] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "iobuf", 00:04:41.427 "config": [ 00:04:41.427 { 00:04:41.427 "method": "iobuf_set_options", 00:04:41.427 "params": { 00:04:41.427 "small_pool_count": 8192, 00:04:41.427 "large_pool_count": 1024, 00:04:41.427 "small_bufsize": 8192, 00:04:41.427 "large_bufsize": 135168, 00:04:41.427 "enable_numa": false 00:04:41.427 } 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "sock", 00:04:41.427 "config": [ 00:04:41.427 { 
00:04:41.427 "method": "sock_set_default_impl", 00:04:41.427 "params": { 00:04:41.427 "impl_name": "posix" 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "sock_impl_set_options", 00:04:41.427 "params": { 00:04:41.427 "impl_name": "ssl", 00:04:41.427 "recv_buf_size": 4096, 00:04:41.427 "send_buf_size": 4096, 00:04:41.427 "enable_recv_pipe": true, 00:04:41.427 "enable_quickack": false, 00:04:41.427 "enable_placement_id": 0, 00:04:41.427 "enable_zerocopy_send_server": true, 00:04:41.427 "enable_zerocopy_send_client": false, 00:04:41.427 "zerocopy_threshold": 0, 00:04:41.427 "tls_version": 0, 00:04:41.427 "enable_ktls": false 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "sock_impl_set_options", 00:04:41.427 "params": { 00:04:41.427 "impl_name": "posix", 00:04:41.427 "recv_buf_size": 2097152, 00:04:41.427 "send_buf_size": 2097152, 00:04:41.427 "enable_recv_pipe": true, 00:04:41.427 "enable_quickack": false, 00:04:41.427 "enable_placement_id": 0, 00:04:41.427 "enable_zerocopy_send_server": true, 00:04:41.427 "enable_zerocopy_send_client": false, 00:04:41.427 "zerocopy_threshold": 0, 00:04:41.427 "tls_version": 0, 00:04:41.427 "enable_ktls": false 00:04:41.427 } 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "vmd", 00:04:41.427 "config": [] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "accel", 00:04:41.427 "config": [ 00:04:41.427 { 00:04:41.427 "method": "accel_set_options", 00:04:41.427 "params": { 00:04:41.427 "small_cache_size": 128, 00:04:41.427 "large_cache_size": 16, 00:04:41.427 "task_count": 2048, 00:04:41.427 "sequence_count": 2048, 00:04:41.427 "buf_count": 2048 00:04:41.427 } 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "bdev", 00:04:41.427 "config": [ 00:04:41.427 { 00:04:41.427 "method": "bdev_set_options", 00:04:41.427 "params": { 00:04:41.427 "bdev_io_pool_size": 65535, 00:04:41.427 "bdev_io_cache_size": 256, 00:04:41.427 "bdev_auto_examine": true, 00:04:41.427 "iobuf_small_cache_size": 128, 00:04:41.427 "iobuf_large_cache_size": 16 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "bdev_raid_set_options", 00:04:41.427 "params": { 00:04:41.427 "process_window_size_kb": 1024, 00:04:41.427 "process_max_bandwidth_mb_sec": 0 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "bdev_iscsi_set_options", 00:04:41.427 "params": { 00:04:41.427 "timeout_sec": 30 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "bdev_nvme_set_options", 00:04:41.427 "params": { 00:04:41.427 "action_on_timeout": "none", 00:04:41.427 "timeout_us": 0, 00:04:41.427 "timeout_admin_us": 0, 00:04:41.427 "keep_alive_timeout_ms": 10000, 00:04:41.427 "arbitration_burst": 0, 00:04:41.427 "low_priority_weight": 0, 00:04:41.427 "medium_priority_weight": 0, 00:04:41.427 "high_priority_weight": 0, 00:04:41.427 "nvme_adminq_poll_period_us": 10000, 00:04:41.427 "nvme_ioq_poll_period_us": 0, 00:04:41.427 "io_queue_requests": 0, 00:04:41.427 "delay_cmd_submit": true, 00:04:41.427 "transport_retry_count": 4, 00:04:41.427 "bdev_retry_count": 3, 00:04:41.427 "transport_ack_timeout": 0, 00:04:41.427 "ctrlr_loss_timeout_sec": 0, 00:04:41.427 "reconnect_delay_sec": 0, 00:04:41.427 "fast_io_fail_timeout_sec": 0, 00:04:41.427 "disable_auto_failback": false, 00:04:41.427 "generate_uuids": false, 00:04:41.427 "transport_tos": 0, 00:04:41.427 "nvme_error_stat": false, 00:04:41.427 "rdma_srq_size": 0, 00:04:41.427 "io_path_stat": false, 
00:04:41.427 "allow_accel_sequence": false, 00:04:41.427 "rdma_max_cq_size": 0, 00:04:41.427 "rdma_cm_event_timeout_ms": 0, 00:04:41.427 "dhchap_digests": [ 00:04:41.427 "sha256", 00:04:41.427 "sha384", 00:04:41.427 "sha512" 00:04:41.427 ], 00:04:41.427 "dhchap_dhgroups": [ 00:04:41.427 "null", 00:04:41.427 "ffdhe2048", 00:04:41.427 "ffdhe3072", 00:04:41.427 "ffdhe4096", 00:04:41.427 "ffdhe6144", 00:04:41.427 "ffdhe8192" 00:04:41.427 ] 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "bdev_nvme_set_hotplug", 00:04:41.427 "params": { 00:04:41.427 "period_us": 100000, 00:04:41.427 "enable": false 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "bdev_wait_for_examine" 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "scsi", 00:04:41.427 "config": null 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "scheduler", 00:04:41.427 "config": [ 00:04:41.427 { 00:04:41.427 "method": "framework_set_scheduler", 00:04:41.427 "params": { 00:04:41.427 "name": "static" 00:04:41.427 } 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "vhost_scsi", 00:04:41.427 "config": [] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "vhost_blk", 00:04:41.427 "config": [] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "ublk", 00:04:41.427 "config": [] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "nbd", 00:04:41.427 "config": [] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "nvmf", 00:04:41.427 "config": [ 00:04:41.427 { 00:04:41.427 "method": "nvmf_set_config", 00:04:41.427 "params": { 00:04:41.427 "discovery_filter": "match_any", 00:04:41.427 "admin_cmd_passthru": { 00:04:41.427 "identify_ctrlr": false 00:04:41.427 }, 00:04:41.427 "dhchap_digests": [ 00:04:41.427 "sha256", 00:04:41.427 "sha384", 00:04:41.427 "sha512" 00:04:41.427 ], 00:04:41.427 "dhchap_dhgroups": [ 00:04:41.427 "null", 00:04:41.427 "ffdhe2048", 00:04:41.427 "ffdhe3072", 00:04:41.427 "ffdhe4096", 00:04:41.427 "ffdhe6144", 00:04:41.427 "ffdhe8192" 00:04:41.427 ] 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "nvmf_set_max_subsystems", 00:04:41.427 "params": { 00:04:41.427 "max_subsystems": 1024 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "nvmf_set_crdt", 00:04:41.427 "params": { 00:04:41.427 "crdt1": 0, 00:04:41.427 "crdt2": 0, 00:04:41.427 "crdt3": 0 00:04:41.427 } 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "method": "nvmf_create_transport", 00:04:41.427 "params": { 00:04:41.427 "trtype": "TCP", 00:04:41.427 "max_queue_depth": 128, 00:04:41.427 "max_io_qpairs_per_ctrlr": 127, 00:04:41.427 "in_capsule_data_size": 4096, 00:04:41.427 "max_io_size": 131072, 00:04:41.427 "io_unit_size": 131072, 00:04:41.427 "max_aq_depth": 128, 00:04:41.427 "num_shared_buffers": 511, 00:04:41.427 "buf_cache_size": 4294967295, 00:04:41.427 "dif_insert_or_strip": false, 00:04:41.427 "zcopy": false, 00:04:41.427 "c2h_success": true, 00:04:41.427 "sock_priority": 0, 00:04:41.427 "abort_timeout_sec": 1, 00:04:41.427 "ack_timeout": 0, 00:04:41.427 "data_wr_pool_size": 0 00:04:41.427 } 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 }, 00:04:41.427 { 00:04:41.427 "subsystem": "iscsi", 00:04:41.427 "config": [ 00:04:41.427 { 00:04:41.427 "method": "iscsi_set_options", 00:04:41.427 "params": { 00:04:41.427 "node_base": "iqn.2016-06.io.spdk", 00:04:41.427 "max_sessions": 128, 00:04:41.427 "max_connections_per_session": 2, 00:04:41.427 "max_queue_depth": 64, 00:04:41.427 
"default_time2wait": 2, 00:04:41.427 "default_time2retain": 20, 00:04:41.427 "first_burst_length": 8192, 00:04:41.427 "immediate_data": true, 00:04:41.427 "allow_duplicated_isid": false, 00:04:41.427 "error_recovery_level": 0, 00:04:41.427 "nop_timeout": 60, 00:04:41.427 "nop_in_interval": 30, 00:04:41.427 "disable_chap": false, 00:04:41.427 "require_chap": false, 00:04:41.427 "mutual_chap": false, 00:04:41.427 "chap_group": 0, 00:04:41.427 "max_large_datain_per_connection": 64, 00:04:41.427 "max_r2t_per_connection": 4, 00:04:41.427 "pdu_pool_size": 36864, 00:04:41.427 "immediate_data_pool_size": 16384, 00:04:41.427 "data_out_pool_size": 2048 00:04:41.427 } 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 } 00:04:41.427 ] 00:04:41.427 } 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57433 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57433 ']' 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57433 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.427 11:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57433 00:04:41.427 killing process with pid 57433 00:04:41.427 11:47:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.427 11:47:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.427 11:47:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57433' 00:04:41.427 11:47:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57433 00:04:41.427 11:47:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57433 00:04:42.800 11:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57473 00:04:42.800 11:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:42.800 11:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57473 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57473 ']' 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57473 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57473 00:04:48.130 killing process with pid 57473 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57473' 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57473 00:04:48.130 11:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57473 00:04:48.696 11:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.696 11:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.696 ************************************ 00:04:48.696 END TEST skip_rpc_with_json 00:04:48.696 ************************************ 00:04:48.696 00:04:48.696 real 0m8.471s 00:04:48.696 user 0m8.138s 00:04:48.696 sys 0m0.552s 00:04:48.696 11:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.696 11:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 11:47:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.955 11:47:46 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.955 11:47:46 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.955 11:47:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 ************************************ 00:04:48.955 START TEST skip_rpc_with_delay 00:04:48.955 ************************************ 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.955 [2024-11-18 11:47:46.487856] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
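The *ERROR* from app.c above is the expected result of skip_rpc_with_delay, not a test failure: --wait-for-rpc defers subsystem initialization until an RPC arrives, so combined with --no-rpc-server the target could never make progress and spdk_app_start refuses to run. A minimal sketch of that check, reusing the spdk_tgt path and core mask from this run (the timeout guard and explicit exit are simplifications of the harness's NOT/es bookkeeping):

    # spdk_tgt must refuse to start when asked to wait for RPCs it can never receive
    if timeout 30 /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt started despite --no-rpc-server --wait-for-rpc" >&2
        exit 1
    fi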
00:04:48.955 ************************************ 00:04:48.955 END TEST skip_rpc_with_delay 00:04:48.955 ************************************ 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.955 00:04:48.955 real 0m0.107s 00:04:48.955 user 0m0.058s 00:04:48.955 sys 0m0.048s 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.955 11:47:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 11:47:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.955 11:47:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.955 11:47:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.955 11:47:46 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.955 11:47:46 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.955 11:47:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 ************************************ 00:04:48.955 START TEST exit_on_failed_rpc_init 00:04:48.955 ************************************ 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57590 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57590 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57590 ']' 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.955 11:47:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.955 [2024-11-18 11:47:46.646919] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:04:48.955 [2024-11-18 11:47:46.647357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57590 ] 00:04:49.214 [2024-11-18 11:47:46.796880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.214 [2024-11-18 11:47:46.876305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:49.781 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.038 [2024-11-18 11:47:47.514670] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:50.038 [2024-11-18 11:47:47.514789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57608 ] 00:04:50.038 [2024-11-18 11:47:47.668578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.297 [2024-11-18 11:47:47.768755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.297 [2024-11-18 11:47:47.768836] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:50.297 [2024-11-18 11:47:47.768851] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.297 [2024-11-18 11:47:47.768862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57590 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57590 ']' 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57590 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57590 00:04:50.297 killing process with pid 57590 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57590' 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57590 00:04:50.297 11:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57590 00:04:51.674 00:04:51.674 real 0m2.585s 00:04:51.674 user 0m2.862s 00:04:51.674 sys 0m0.383s 00:04:51.674 11:47:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.674 11:47:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.674 ************************************ 00:04:51.674 END TEST exit_on_failed_rpc_init 00:04:51.674 ************************************ 00:04:51.674 11:47:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.674 ************************************ 00:04:51.674 END TEST skip_rpc 00:04:51.674 ************************************ 00:04:51.674 00:04:51.674 real 0m18.018s 00:04:51.674 user 0m17.373s 00:04:51.674 sys 0m1.413s 00:04:51.674 11:47:49 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.674 11:47:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.674 11:47:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.674 11:47:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.674 11:47:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.674 11:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:51.674 
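The rpc.c and app.c errors above are the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 57590) already listens on /var/tmp/spdk.sock, so a second instance started on another core mask must fail _spdk_rpc_listen(), abort startup, and exit non-zero (the es=234 that the harness folds down to 1). A minimal sketch of the scenario, assuming the binary path from this run and a fixed sleep in place of the harness's waitforlisten helper:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                    # first target claims /var/tmp/spdk.sock
    first=$!
    sleep 1                                 # crude stand-in for waitforlisten
    if timeout 30 "$SPDK_TGT" -m 0x2; then  # must fail: RPC socket already in use
        echo "second spdk_tgt started despite socket conflict" >&2
        kill "$first"; exit 1
    fi
    kill "$first"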
************************************ 00:04:51.674 START TEST rpc_client 00:04:51.674 ************************************ 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:51.674 * Looking for test storage... 00:04:51.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.674 11:47:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.674 --rc genhtml_branch_coverage=1 00:04:51.674 --rc genhtml_function_coverage=1 00:04:51.674 --rc genhtml_legend=1 00:04:51.674 --rc geninfo_all_blocks=1 00:04:51.674 --rc geninfo_unexecuted_blocks=1 00:04:51.674 00:04:51.674 ' 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.674 --rc genhtml_branch_coverage=1 00:04:51.674 --rc genhtml_function_coverage=1 00:04:51.674 --rc genhtml_legend=1 00:04:51.674 --rc geninfo_all_blocks=1 00:04:51.674 --rc geninfo_unexecuted_blocks=1 00:04:51.674 00:04:51.674 ' 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.674 --rc genhtml_branch_coverage=1 00:04:51.674 --rc genhtml_function_coverage=1 00:04:51.674 --rc genhtml_legend=1 00:04:51.674 --rc geninfo_all_blocks=1 00:04:51.674 --rc geninfo_unexecuted_blocks=1 00:04:51.674 00:04:51.674 ' 00:04:51.674 11:47:49 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.674 --rc genhtml_branch_coverage=1 00:04:51.674 --rc genhtml_function_coverage=1 00:04:51.674 --rc genhtml_legend=1 00:04:51.674 --rc geninfo_all_blocks=1 00:04:51.674 --rc geninfo_unexecuted_blocks=1 00:04:51.674 00:04:51.674 ' 00:04:51.674 11:47:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:51.933 OK 00:04:51.933 11:47:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.933 00:04:51.933 real 0m0.179s 00:04:51.933 user 0m0.111s 00:04:51.933 sys 0m0.075s 00:04:51.933 11:47:49 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.933 11:47:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.933 ************************************ 00:04:51.933 END TEST rpc_client 00:04:51.933 ************************************ 00:04:51.933 11:47:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.934 11:47:49 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.934 11:47:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.934 11:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:51.934 ************************************ 00:04:51.934 START TEST json_config 00:04:51.934 ************************************ 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.934 11:47:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.934 11:47:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.934 11:47:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.934 11:47:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.934 11:47:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.934 11:47:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:51.934 11:47:49 json_config -- scripts/common.sh@345 -- # : 1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.934 11:47:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.934 11:47:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@353 -- # local d=1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.934 11:47:49 json_config -- scripts/common.sh@355 -- # echo 1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.934 11:47:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@353 -- # local d=2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.934 11:47:49 json_config -- scripts/common.sh@355 -- # echo 2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.934 11:47:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.934 11:47:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.934 11:47:49 json_config -- scripts/common.sh@368 -- # return 0 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.934 --rc genhtml_branch_coverage=1 00:04:51.934 --rc genhtml_function_coverage=1 00:04:51.934 --rc genhtml_legend=1 00:04:51.934 --rc geninfo_all_blocks=1 00:04:51.934 --rc geninfo_unexecuted_blocks=1 00:04:51.934 00:04:51.934 ' 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.934 --rc genhtml_branch_coverage=1 00:04:51.934 --rc genhtml_function_coverage=1 00:04:51.934 --rc genhtml_legend=1 00:04:51.934 --rc geninfo_all_blocks=1 00:04:51.934 --rc geninfo_unexecuted_blocks=1 00:04:51.934 00:04:51.934 ' 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.934 --rc genhtml_branch_coverage=1 00:04:51.934 --rc genhtml_function_coverage=1 00:04:51.934 --rc genhtml_legend=1 00:04:51.934 --rc geninfo_all_blocks=1 00:04:51.934 --rc geninfo_unexecuted_blocks=1 00:04:51.934 00:04:51.934 ' 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.934 --rc genhtml_branch_coverage=1 00:04:51.934 --rc genhtml_function_coverage=1 00:04:51.934 --rc genhtml_legend=1 00:04:51.934 --rc geninfo_all_blocks=1 00:04:51.934 --rc geninfo_unexecuted_blocks=1 00:04:51.934 00:04:51.934 ' 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.934 11:47:49 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cd5049a-d906-44d6-9f42-91ef1a8b3187 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5cd5049a-d906-44d6-9f42-91ef1a8b3187 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.934 11:47:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.934 11:47:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.934 11:47:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.934 11:47:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.934 11:47:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.934 11:47:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.934 11:47:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.934 11:47:49 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.934 11:47:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@51 -- # : 0 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.934 11:47:49 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.934 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.934 11:47:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.934 WARNING: No tests are enabled so not running JSON configuration tests 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:51.934 11:47:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:51.934 00:04:51.934 real 0m0.130s 00:04:51.934 user 0m0.084s 00:04:51.934 sys 0m0.047s 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.934 11:47:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.934 ************************************ 00:04:51.934 END TEST json_config 00:04:51.934 ************************************ 00:04:51.934 11:47:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.934 11:47:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.934 11:47:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.934 11:47:49 -- common/autotest_common.sh@10 -- # set +x 00:04:51.934 ************************************ 00:04:51.934 START TEST json_config_extra_key 00:04:51.934 ************************************ 00:04:51.934 11:47:49 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:52.193 11:47:49 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.193 11:47:49 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.193 11:47:49 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.193 11:47:49 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.193 11:47:49 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:52.193 11:47:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:52.194 11:47:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.194 11:47:49 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 11:47:49 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 11:47:49 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 11:47:49 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.194 --rc genhtml_branch_coverage=1 00:04:52.194 --rc 
genhtml_function_coverage=1 00:04:52.194 --rc genhtml_legend=1 00:04:52.194 --rc geninfo_all_blocks=1 00:04:52.194 --rc geninfo_unexecuted_blocks=1 00:04:52.194 00:04:52.194 ' 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cd5049a-d906-44d6-9f42-91ef1a8b3187 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5cd5049a-d906-44d6-9f42-91ef1a8b3187 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.194 11:47:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.194 11:47:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.194 11:47:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.194 11:47:49 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.194 11:47:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:52.194 11:47:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.194 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.194 11:47:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:52.194 INFO: launching applications... 00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
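The "[: : integer expression expected" complaint recorded above (nvmf/common.sh line 33) is a standard bash pitfall rather than a test failure: the traced command is '[' '' -eq 1 ']', a numeric -eq comparison against a variable that expanded to the empty string. '[' rejects the empty string as a non-integer, prints the diagnostic, and returns status 2, which here merely makes the condition false and lets the script continue. A minimal reproduction and the usual guards (the flag variable below is illustrative, not a real SPDK setting):

  flag=""                                  # optional setting left unset/empty
  [ "$flag" -eq 1 ] && echo enabled        # -> "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo enabled   # guard: default empty to 0
  [[ "$flag" == 1 ]] && echo enabled       # or compare as a string instead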
00:04:52.194 11:47:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:52.194 11:47:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:52.194 11:47:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:52.194 11:47:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.194 11:47:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.194 11:47:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.195 11:47:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.195 11:47:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.195 11:47:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57801 00:04:52.195 Waiting for target to run... 00:04:52.195 11:47:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.195 11:47:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57801 /var/tmp/spdk_tgt.sock 00:04:52.195 11:47:49 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57801 ']' 00:04:52.195 11:47:49 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.195 11:47:49 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.195 11:47:49 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.195 11:47:49 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.195 11:47:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.195 11:47:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:52.195 [2024-11-18 11:47:49.790939] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:52.195 [2024-11-18 11:47:49.791152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57801 ] 00:04:52.455 [2024-11-18 11:47:50.095435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.716 [2024-11-18 11:47:50.202567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.285 11:47:50 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.285 00:04:53.285 INFO: shutting down applications... 00:04:53.285 11:47:50 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.285 11:47:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
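The launch sequence traced above is the generic start half of json_config/common.sh: spdk_tgt is started with a private RPC socket and the JSON config under test, its pid is recorded in app_pid["target"], and waitforlisten blocks until the RPC socket answers. Reduced to its essentials (the readiness probe here uses the spdk_get_version RPC; the real waitforlisten helper in autotest_common.sh is more elaborate):

  # start the target with an RPC socket and a JSON config
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  pid=$!

  # poll the RPC socket until the app answers (or give up after ~10s)
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version &>/dev/null && break
      sleep 0.1
  done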
00:04:53.285 11:47:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57801 ]] 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57801 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57801 00:04:53.285 11:47:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.542 11:47:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.542 11:47:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.542 11:47:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57801 00:04:53.542 11:47:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.113 11:47:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.113 11:47:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.113 11:47:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57801 00:04:54.113 11:47:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.684 11:47:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.684 11:47:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.684 11:47:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57801 00:04:54.684 11:47:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57801 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.255 SPDK target shutdown done 00:04:55.255 Success 00:04:55.255 11:47:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.255 11:47:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:55.255 ************************************ 00:04:55.255 END TEST json_config_extra_key 00:04:55.255 ************************************ 00:04:55.255 00:04:55.255 real 0m3.142s 00:04:55.255 user 0m2.805s 00:04:55.255 sys 0m0.367s 00:04:55.255 11:47:52 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.255 11:47:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.255 11:47:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.255 11:47:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.255 11:47:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.255 11:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:55.255 
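Condensed from the xtrace above, the shutdown handshake in json_config/common.sh is a SIGINT followed by a bounded liveness poll; kill -0 sends no signal and only tests whether the pid still exists:

  kill -SIGINT "$pid"                       # ask the SPDK reactor to exit cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # process gone -> shutdown done
      sleep 0.5                             # up to ~15s of grace
  done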
************************************ 00:04:55.255 START TEST alias_rpc 00:04:55.255 ************************************ 00:04:55.255 11:47:52 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.255 * Looking for test storage... 00:04:55.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:55.255 11:47:52 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.255 11:47:52 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.255 11:47:52 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.255 11:47:52 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.255 11:47:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:55.255 11:47:52 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.256 --rc genhtml_branch_coverage=1 00:04:55.256 --rc genhtml_function_coverage=1 00:04:55.256 --rc genhtml_legend=1 00:04:55.256 --rc geninfo_all_blocks=1 00:04:55.256 --rc geninfo_unexecuted_blocks=1 00:04:55.256 00:04:55.256 ' 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.256 --rc genhtml_branch_coverage=1 00:04:55.256 --rc genhtml_function_coverage=1 00:04:55.256 --rc genhtml_legend=1 00:04:55.256 --rc geninfo_all_blocks=1 00:04:55.256 --rc geninfo_unexecuted_blocks=1 00:04:55.256 00:04:55.256 ' 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.256 --rc genhtml_branch_coverage=1 00:04:55.256 --rc genhtml_function_coverage=1 00:04:55.256 --rc genhtml_legend=1 00:04:55.256 --rc geninfo_all_blocks=1 00:04:55.256 --rc geninfo_unexecuted_blocks=1 00:04:55.256 00:04:55.256 ' 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.256 --rc genhtml_branch_coverage=1 00:04:55.256 --rc genhtml_function_coverage=1 00:04:55.256 --rc genhtml_legend=1 00:04:55.256 --rc geninfo_all_blocks=1 00:04:55.256 --rc geninfo_unexecuted_blocks=1 00:04:55.256 00:04:55.256 ' 00:04:55.256 11:47:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.256 11:47:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57894 00:04:55.256 11:47:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.256 11:47:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57894 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57894 ']' 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:55.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.256 11:47:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.516 [2024-11-18 11:47:53.020425] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:04:55.516 [2024-11-18 11:47:53.020694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57894 ] 00:04:55.516 [2024-11-18 11:47:53.175964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.774 [2024-11-18 11:47:53.284516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.341 11:47:53 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.341 11:47:53 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:56.341 11:47:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:56.600 11:47:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57894 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57894 ']' 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57894 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57894 00:04:56.600 killing process with pid 57894 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57894' 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@971 -- # kill 57894 00:04:56.600 11:47:54 alias_rpc -- common/autotest_common.sh@976 -- # wait 57894 00:04:57.981 ************************************ 00:04:57.981 END TEST alias_rpc 00:04:57.981 ************************************ 00:04:57.981 00:04:57.981 real 0m2.878s 00:04:57.981 user 0m2.957s 00:04:57.981 sys 0m0.414s 00:04:57.981 11:47:55 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.981 11:47:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.242 11:47:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.242 11:47:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.242 11:47:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.242 11:47:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.242 11:47:55 -- common/autotest_common.sh@10 -- # set +x 00:04:58.242 ************************************ 00:04:58.242 START TEST spdkcli_tcp 00:04:58.242 ************************************ 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.242 * Looking for test storage... 
00:04:58.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.242 11:47:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.242 --rc genhtml_branch_coverage=1 00:04:58.242 --rc genhtml_function_coverage=1 00:04:58.242 --rc genhtml_legend=1 00:04:58.242 --rc geninfo_all_blocks=1 00:04:58.242 --rc geninfo_unexecuted_blocks=1 00:04:58.242 00:04:58.242 ' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.242 --rc genhtml_branch_coverage=1 00:04:58.242 --rc genhtml_function_coverage=1 00:04:58.242 --rc genhtml_legend=1 00:04:58.242 --rc geninfo_all_blocks=1 00:04:58.242 --rc geninfo_unexecuted_blocks=1 00:04:58.242 
00:04:58.242 ' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.242 --rc genhtml_branch_coverage=1 00:04:58.242 --rc genhtml_function_coverage=1 00:04:58.242 --rc genhtml_legend=1 00:04:58.242 --rc geninfo_all_blocks=1 00:04:58.242 --rc geninfo_unexecuted_blocks=1 00:04:58.242 00:04:58.242 ' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.242 --rc genhtml_branch_coverage=1 00:04:58.242 --rc genhtml_function_coverage=1 00:04:58.242 --rc genhtml_legend=1 00:04:58.242 --rc geninfo_all_blocks=1 00:04:58.242 --rc geninfo_unexecuted_blocks=1 00:04:58.242 00:04:58.242 ' 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57987 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57987 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57987 ']' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.242 11:47:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.242 11:47:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.504 [2024-11-18 11:47:55.970610] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
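Unlike the earlier tests, spdkcli_tcp exercises the RPC server over TCP: tcp.sh sets IP_ADDRESS=127.0.0.1 and PORT=9998 (traced above), starts spdk_tgt on its default UNIX socket, and bridges the two with socat. The core of the test, reduced to the three commands that appear verbatim in this log:

  build/bin/spdk_tgt -m 0x3 -p 0 &                          # RPC on /var/tmp/spdk.sock
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # expose it over TCP

  # -r retries, -t per-call timeout; rpc_get_methods lists every registered RPC
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods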
00:04:58.504 [2024-11-18 11:47:55.970757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57987 ] 00:04:58.504 [2024-11-18 11:47:56.130491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.766 [2024-11-18 11:47:56.258777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.766 [2024-11-18 11:47:56.258889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.333 11:47:56 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.333 11:47:56 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:59.333 11:47:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58003 00:04:59.333 11:47:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.333 11:47:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.592 [ 00:04:59.593 "bdev_malloc_delete", 00:04:59.593 "bdev_malloc_create", 00:04:59.593 "bdev_null_resize", 00:04:59.593 "bdev_null_delete", 00:04:59.593 "bdev_null_create", 00:04:59.593 "bdev_nvme_cuse_unregister", 00:04:59.593 "bdev_nvme_cuse_register", 00:04:59.593 "bdev_opal_new_user", 00:04:59.593 "bdev_opal_set_lock_state", 00:04:59.593 "bdev_opal_delete", 00:04:59.593 "bdev_opal_get_info", 00:04:59.593 "bdev_opal_create", 00:04:59.593 "bdev_nvme_opal_revert", 00:04:59.593 "bdev_nvme_opal_init", 00:04:59.593 "bdev_nvme_send_cmd", 00:04:59.593 "bdev_nvme_set_keys", 00:04:59.593 "bdev_nvme_get_path_iostat", 00:04:59.593 "bdev_nvme_get_mdns_discovery_info", 00:04:59.593 "bdev_nvme_stop_mdns_discovery", 00:04:59.593 "bdev_nvme_start_mdns_discovery", 00:04:59.593 "bdev_nvme_set_multipath_policy", 00:04:59.593 "bdev_nvme_set_preferred_path", 00:04:59.593 "bdev_nvme_get_io_paths", 00:04:59.593 "bdev_nvme_remove_error_injection", 00:04:59.593 "bdev_nvme_add_error_injection", 00:04:59.593 "bdev_nvme_get_discovery_info", 00:04:59.593 "bdev_nvme_stop_discovery", 00:04:59.593 "bdev_nvme_start_discovery", 00:04:59.593 "bdev_nvme_get_controller_health_info", 00:04:59.593 "bdev_nvme_disable_controller", 00:04:59.593 "bdev_nvme_enable_controller", 00:04:59.593 "bdev_nvme_reset_controller", 00:04:59.593 "bdev_nvme_get_transport_statistics", 00:04:59.593 "bdev_nvme_apply_firmware", 00:04:59.593 "bdev_nvme_detach_controller", 00:04:59.593 "bdev_nvme_get_controllers", 00:04:59.593 "bdev_nvme_attach_controller", 00:04:59.593 "bdev_nvme_set_hotplug", 00:04:59.593 "bdev_nvme_set_options", 00:04:59.593 "bdev_passthru_delete", 00:04:59.593 "bdev_passthru_create", 00:04:59.593 "bdev_lvol_set_parent_bdev", 00:04:59.593 "bdev_lvol_set_parent", 00:04:59.593 "bdev_lvol_check_shallow_copy", 00:04:59.593 "bdev_lvol_start_shallow_copy", 00:04:59.593 "bdev_lvol_grow_lvstore", 00:04:59.593 "bdev_lvol_get_lvols", 00:04:59.593 "bdev_lvol_get_lvstores", 00:04:59.593 "bdev_lvol_delete", 00:04:59.593 "bdev_lvol_set_read_only", 00:04:59.593 "bdev_lvol_resize", 00:04:59.593 "bdev_lvol_decouple_parent", 00:04:59.593 "bdev_lvol_inflate", 00:04:59.593 "bdev_lvol_rename", 00:04:59.593 "bdev_lvol_clone_bdev", 00:04:59.593 "bdev_lvol_clone", 00:04:59.593 "bdev_lvol_snapshot", 00:04:59.593 "bdev_lvol_create", 00:04:59.593 "bdev_lvol_delete_lvstore", 00:04:59.593 "bdev_lvol_rename_lvstore", 00:04:59.593 
"bdev_lvol_create_lvstore", 00:04:59.593 "bdev_raid_set_options", 00:04:59.593 "bdev_raid_remove_base_bdev", 00:04:59.593 "bdev_raid_add_base_bdev", 00:04:59.593 "bdev_raid_delete", 00:04:59.593 "bdev_raid_create", 00:04:59.593 "bdev_raid_get_bdevs", 00:04:59.593 "bdev_error_inject_error", 00:04:59.593 "bdev_error_delete", 00:04:59.593 "bdev_error_create", 00:04:59.593 "bdev_split_delete", 00:04:59.593 "bdev_split_create", 00:04:59.593 "bdev_delay_delete", 00:04:59.593 "bdev_delay_create", 00:04:59.593 "bdev_delay_update_latency", 00:04:59.593 "bdev_zone_block_delete", 00:04:59.593 "bdev_zone_block_create", 00:04:59.593 "blobfs_create", 00:04:59.593 "blobfs_detect", 00:04:59.593 "blobfs_set_cache_size", 00:04:59.593 "bdev_xnvme_delete", 00:04:59.593 "bdev_xnvme_create", 00:04:59.593 "bdev_aio_delete", 00:04:59.593 "bdev_aio_rescan", 00:04:59.593 "bdev_aio_create", 00:04:59.593 "bdev_ftl_set_property", 00:04:59.593 "bdev_ftl_get_properties", 00:04:59.593 "bdev_ftl_get_stats", 00:04:59.593 "bdev_ftl_unmap", 00:04:59.593 "bdev_ftl_unload", 00:04:59.593 "bdev_ftl_delete", 00:04:59.593 "bdev_ftl_load", 00:04:59.593 "bdev_ftl_create", 00:04:59.593 "bdev_virtio_attach_controller", 00:04:59.593 "bdev_virtio_scsi_get_devices", 00:04:59.593 "bdev_virtio_detach_controller", 00:04:59.593 "bdev_virtio_blk_set_hotplug", 00:04:59.593 "bdev_iscsi_delete", 00:04:59.593 "bdev_iscsi_create", 00:04:59.593 "bdev_iscsi_set_options", 00:04:59.593 "accel_error_inject_error", 00:04:59.593 "ioat_scan_accel_module", 00:04:59.593 "dsa_scan_accel_module", 00:04:59.593 "iaa_scan_accel_module", 00:04:59.593 "keyring_file_remove_key", 00:04:59.593 "keyring_file_add_key", 00:04:59.593 "keyring_linux_set_options", 00:04:59.593 "fsdev_aio_delete", 00:04:59.593 "fsdev_aio_create", 00:04:59.593 "iscsi_get_histogram", 00:04:59.593 "iscsi_enable_histogram", 00:04:59.593 "iscsi_set_options", 00:04:59.593 "iscsi_get_auth_groups", 00:04:59.593 "iscsi_auth_group_remove_secret", 00:04:59.593 "iscsi_auth_group_add_secret", 00:04:59.593 "iscsi_delete_auth_group", 00:04:59.593 "iscsi_create_auth_group", 00:04:59.593 "iscsi_set_discovery_auth", 00:04:59.593 "iscsi_get_options", 00:04:59.593 "iscsi_target_node_request_logout", 00:04:59.593 "iscsi_target_node_set_redirect", 00:04:59.593 "iscsi_target_node_set_auth", 00:04:59.593 "iscsi_target_node_add_lun", 00:04:59.593 "iscsi_get_stats", 00:04:59.593 "iscsi_get_connections", 00:04:59.593 "iscsi_portal_group_set_auth", 00:04:59.593 "iscsi_start_portal_group", 00:04:59.593 "iscsi_delete_portal_group", 00:04:59.593 "iscsi_create_portal_group", 00:04:59.593 "iscsi_get_portal_groups", 00:04:59.593 "iscsi_delete_target_node", 00:04:59.593 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.593 "iscsi_target_node_add_pg_ig_maps", 00:04:59.593 "iscsi_create_target_node", 00:04:59.593 "iscsi_get_target_nodes", 00:04:59.593 "iscsi_delete_initiator_group", 00:04:59.593 "iscsi_initiator_group_remove_initiators", 00:04:59.593 "iscsi_initiator_group_add_initiators", 00:04:59.593 "iscsi_create_initiator_group", 00:04:59.593 "iscsi_get_initiator_groups", 00:04:59.593 "nvmf_set_crdt", 00:04:59.593 "nvmf_set_config", 00:04:59.593 "nvmf_set_max_subsystems", 00:04:59.593 "nvmf_stop_mdns_prr", 00:04:59.593 "nvmf_publish_mdns_prr", 00:04:59.593 "nvmf_subsystem_get_listeners", 00:04:59.593 "nvmf_subsystem_get_qpairs", 00:04:59.593 "nvmf_subsystem_get_controllers", 00:04:59.593 "nvmf_get_stats", 00:04:59.593 "nvmf_get_transports", 00:04:59.593 "nvmf_create_transport", 00:04:59.593 "nvmf_get_targets", 00:04:59.593 
"nvmf_delete_target", 00:04:59.593 "nvmf_create_target", 00:04:59.593 "nvmf_subsystem_allow_any_host", 00:04:59.593 "nvmf_subsystem_set_keys", 00:04:59.593 "nvmf_discovery_referral_remove_host", 00:04:59.593 "nvmf_discovery_referral_add_host", 00:04:59.593 "nvmf_subsystem_remove_host", 00:04:59.593 "nvmf_subsystem_add_host", 00:04:59.593 "nvmf_ns_remove_host", 00:04:59.593 "nvmf_ns_add_host", 00:04:59.593 "nvmf_subsystem_remove_ns", 00:04:59.593 "nvmf_subsystem_set_ns_ana_group", 00:04:59.593 "nvmf_subsystem_add_ns", 00:04:59.593 "nvmf_subsystem_listener_set_ana_state", 00:04:59.593 "nvmf_discovery_get_referrals", 00:04:59.593 "nvmf_discovery_remove_referral", 00:04:59.593 "nvmf_discovery_add_referral", 00:04:59.593 "nvmf_subsystem_remove_listener", 00:04:59.593 "nvmf_subsystem_add_listener", 00:04:59.593 "nvmf_delete_subsystem", 00:04:59.593 "nvmf_create_subsystem", 00:04:59.593 "nvmf_get_subsystems", 00:04:59.593 "env_dpdk_get_mem_stats", 00:04:59.593 "nbd_get_disks", 00:04:59.593 "nbd_stop_disk", 00:04:59.593 "nbd_start_disk", 00:04:59.593 "ublk_recover_disk", 00:04:59.593 "ublk_get_disks", 00:04:59.593 "ublk_stop_disk", 00:04:59.593 "ublk_start_disk", 00:04:59.593 "ublk_destroy_target", 00:04:59.593 "ublk_create_target", 00:04:59.593 "virtio_blk_create_transport", 00:04:59.593 "virtio_blk_get_transports", 00:04:59.593 "vhost_controller_set_coalescing", 00:04:59.593 "vhost_get_controllers", 00:04:59.593 "vhost_delete_controller", 00:04:59.593 "vhost_create_blk_controller", 00:04:59.593 "vhost_scsi_controller_remove_target", 00:04:59.593 "vhost_scsi_controller_add_target", 00:04:59.593 "vhost_start_scsi_controller", 00:04:59.593 "vhost_create_scsi_controller", 00:04:59.593 "thread_set_cpumask", 00:04:59.593 "scheduler_set_options", 00:04:59.593 "framework_get_governor", 00:04:59.593 "framework_get_scheduler", 00:04:59.593 "framework_set_scheduler", 00:04:59.593 "framework_get_reactors", 00:04:59.593 "thread_get_io_channels", 00:04:59.593 "thread_get_pollers", 00:04:59.593 "thread_get_stats", 00:04:59.593 "framework_monitor_context_switch", 00:04:59.593 "spdk_kill_instance", 00:04:59.593 "log_enable_timestamps", 00:04:59.593 "log_get_flags", 00:04:59.593 "log_clear_flag", 00:04:59.593 "log_set_flag", 00:04:59.593 "log_get_level", 00:04:59.593 "log_set_level", 00:04:59.593 "log_get_print_level", 00:04:59.593 "log_set_print_level", 00:04:59.593 "framework_enable_cpumask_locks", 00:04:59.593 "framework_disable_cpumask_locks", 00:04:59.593 "framework_wait_init", 00:04:59.593 "framework_start_init", 00:04:59.593 "scsi_get_devices", 00:04:59.593 "bdev_get_histogram", 00:04:59.593 "bdev_enable_histogram", 00:04:59.593 "bdev_set_qos_limit", 00:04:59.593 "bdev_set_qd_sampling_period", 00:04:59.593 "bdev_get_bdevs", 00:04:59.593 "bdev_reset_iostat", 00:04:59.593 "bdev_get_iostat", 00:04:59.593 "bdev_examine", 00:04:59.593 "bdev_wait_for_examine", 00:04:59.593 "bdev_set_options", 00:04:59.593 "accel_get_stats", 00:04:59.593 "accel_set_options", 00:04:59.593 "accel_set_driver", 00:04:59.593 "accel_crypto_key_destroy", 00:04:59.593 "accel_crypto_keys_get", 00:04:59.594 "accel_crypto_key_create", 00:04:59.594 "accel_assign_opc", 00:04:59.594 "accel_get_module_info", 00:04:59.594 "accel_get_opc_assignments", 00:04:59.594 "vmd_rescan", 00:04:59.594 "vmd_remove_device", 00:04:59.594 "vmd_enable", 00:04:59.594 "sock_get_default_impl", 00:04:59.594 "sock_set_default_impl", 00:04:59.594 "sock_impl_set_options", 00:04:59.594 "sock_impl_get_options", 00:04:59.594 "iobuf_get_stats", 00:04:59.594 
"iobuf_set_options", 00:04:59.594 "keyring_get_keys", 00:04:59.594 "framework_get_pci_devices", 00:04:59.594 "framework_get_config", 00:04:59.594 "framework_get_subsystems", 00:04:59.594 "fsdev_set_opts", 00:04:59.594 "fsdev_get_opts", 00:04:59.594 "trace_get_info", 00:04:59.594 "trace_get_tpoint_group_mask", 00:04:59.594 "trace_disable_tpoint_group", 00:04:59.594 "trace_enable_tpoint_group", 00:04:59.594 "trace_clear_tpoint_mask", 00:04:59.594 "trace_set_tpoint_mask", 00:04:59.594 "notify_get_notifications", 00:04:59.594 "notify_get_types", 00:04:59.594 "spdk_get_version", 00:04:59.594 "rpc_get_methods" 00:04:59.594 ] 00:04:59.594 11:47:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.594 11:47:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:59.594 11:47:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57987 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57987 ']' 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57987 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57987 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57987' 00:04:59.594 killing process with pid 57987 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57987 00:04:59.594 11:47:57 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57987 00:05:00.975 00:05:00.975 real 0m2.916s 00:05:00.975 user 0m5.166s 00:05:00.975 sys 0m0.513s 00:05:00.975 11:47:58 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.975 11:47:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.975 ************************************ 00:05:00.975 END TEST spdkcli_tcp 00:05:00.975 ************************************ 00:05:01.236 11:47:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.236 11:47:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.236 11:47:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.236 11:47:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.236 ************************************ 00:05:01.236 START TEST dpdk_mem_utility 00:05:01.236 ************************************ 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.236 * Looking for test storage... 
00:05:01.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.236 11:47:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.236 --rc genhtml_branch_coverage=1 00:05:01.236 --rc genhtml_function_coverage=1 00:05:01.236 --rc genhtml_legend=1 00:05:01.236 --rc geninfo_all_blocks=1 00:05:01.236 --rc geninfo_unexecuted_blocks=1 00:05:01.236 00:05:01.236 ' 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.236 --rc 
genhtml_branch_coverage=1 00:05:01.236 --rc genhtml_function_coverage=1 00:05:01.236 --rc genhtml_legend=1 00:05:01.236 --rc geninfo_all_blocks=1 00:05:01.236 --rc geninfo_unexecuted_blocks=1 00:05:01.236 00:05:01.236 ' 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.236 --rc genhtml_branch_coverage=1 00:05:01.236 --rc genhtml_function_coverage=1 00:05:01.236 --rc genhtml_legend=1 00:05:01.236 --rc geninfo_all_blocks=1 00:05:01.236 --rc geninfo_unexecuted_blocks=1 00:05:01.236 00:05:01.236 ' 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.236 --rc genhtml_branch_coverage=1 00:05:01.236 --rc genhtml_function_coverage=1 00:05:01.236 --rc genhtml_legend=1 00:05:01.236 --rc geninfo_all_blocks=1 00:05:01.236 --rc geninfo_unexecuted_blocks=1 00:05:01.236 00:05:01.236 ' 00:05:01.236 11:47:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.236 11:47:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58096 00:05:01.236 11:47:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.236 11:47:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58096 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58096 ']' 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.236 11:47:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.236 [2024-11-18 11:47:58.885188] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
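The dpdk_mem_utility run that continues below has a simple two-step shape: ask the live target to dump its DPDK memory state to a file over RPC, then post-process that file with the helper script. As the traced reply shows, env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt, which dpdk_mem_info.py then summarizes (judging by the dump that follows, -m 0 selects the detailed element listing for heap id 0):

  scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
  scripts/dpdk_mem_info.py                # overall heap/mempool/memzone summary
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0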
00:05:01.237 [2024-11-18 11:47:58.885424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58096 ] 00:05:01.498 [2024-11-18 11:47:59.040140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.498 [2024-11-18 11:47:59.168344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.439 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.439 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:02.439 11:47:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.439 11:47:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.439 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.439 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.439 { 00:05:02.439 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.439 } 00:05:02.439 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.439 11:47:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.439 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:02.439 1 heaps totaling size 816.000000 MiB 00:05:02.439 size: 816.000000 MiB heap id: 0 00:05:02.439 end heaps---------- 00:05:02.439 9 mempools totaling size 595.772034 MiB 00:05:02.439 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.439 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.439 size: 92.545471 MiB name: bdev_io_58096 00:05:02.439 size: 50.003479 MiB name: msgpool_58096 00:05:02.439 size: 36.509338 MiB name: fsdev_io_58096 00:05:02.439 size: 21.763794 MiB name: PDU_Pool 00:05:02.439 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.439 size: 4.133484 MiB name: evtpool_58096 00:05:02.439 size: 0.026123 MiB name: Session_Pool 00:05:02.439 end mempools------- 00:05:02.439 6 memzones totaling size 4.142822 MiB 00:05:02.440 size: 1.000366 MiB name: RG_ring_0_58096 00:05:02.440 size: 1.000366 MiB name: RG_ring_1_58096 00:05:02.440 size: 1.000366 MiB name: RG_ring_4_58096 00:05:02.440 size: 1.000366 MiB name: RG_ring_5_58096 00:05:02.440 size: 0.125366 MiB name: RG_ring_2_58096 00:05:02.440 size: 0.015991 MiB name: RG_ring_3_58096 00:05:02.440 end memzones------- 00:05:02.440 11:47:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.440 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18 00:05:02.440 list of free elements. 
size: 16.789429 MiB
00:05:02.440 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:02.440 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:02.440 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:02.440 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:02.440 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:02.440 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:02.440 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:02.440 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:02.440 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:02.440 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:02.440 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:02.440 element at address: 0x20001ac00000 with size: 0.559998 MiB
00:05:02.440 element at address: 0x200000c00000 with size: 0.490173 MiB
00:05:02.440 element at address: 0x200018e00000 with size: 0.487976 MiB
00:05:02.440 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:02.440 element at address: 0x200012c00000 with size: 0.443237 MiB
00:05:02.440 element at address: 0x200028000000 with size: 0.390442 MiB
00:05:02.440 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:02.440 list of standard malloc elements. size: 199.289673 MiB
00:05:02.440 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:02.440 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:02.440 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:02.440 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:02.440 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:02.440 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:02.440 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:02.440 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:02.440 [several hundred further per-buffer elements condensed: 0.000427 MiB and below, almost all 0.000244 MiB each, spanning the 0x2000004.., 0x2000008.., 0x200000c.., 0x20000a5.., 0x200012b.., 0x200012c.., 0x200018.., 0x2000195.., 0x20001ac9.. and 0x2000280.. regions]
00:05:02.442 list of memzone associated elements. size: 599.920898 MiB
00:05:02.442 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:02.442 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:02.442 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:02.442 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:02.442 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:02.442 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58096_0
00:05:02.442 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:02.442 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58096_0
00:05:02.442 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:02.442 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58096_0
00:05:02.442 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:02.442 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:02.442 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:02.442 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:02.442 [remaining memzone records condensed: MP_evtpool_58096 and the MP_*_Pool control structures at ~1-3 MiB, the RG_MP_* mempool rings at ~0.5 MiB, reactor rings RG_ring_0..5_58096, and the MP_Session_Pool entries down to 0.000366 MiB]
00:05:02.442 11:47:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:02.442 11:47:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58096
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58096 ']'
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58096
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58096
00:05:02.442 killing process with pid 58096
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58096'
00:05:02.442 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58096
00:05:03.818 11:47:59 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58096
00:05:03.818 
00:05:03.818 real 0m2.673s
00:05:03.818 user 0m2.620s
00:05:03.818 sys 0m0.432s
00:05:03.818 11:48:01 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:03.818 11:48:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:03.818 ************************************
00:05:03.818 END TEST dpdk_mem_utility
00:05:03.818 ************************************
00:05:03.818 11:48:01 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:03.818 11:48:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:03.818 11:48:01 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:03.818 11:48:01 -- common/autotest_common.sh@10 -- # set +x
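The heap/element/memzone dump above is what the dpdk_mem_utility test collects from the running target over JSON-RPC. A rough sketch of reproducing such a dump by hand, assuming a built SPDK checkout; env_dpdk_get_mem_stats is the RPC this test exercises, but treat the reply shape and the spdk_tgt path as assumptions to verify in your tree:

  # Start a target, ask it to write its DPDK memory stats, then stop it.
  ./build/bin/spdk_tgt &                    # background SPDK app (assumed build path)
  tgt=$!
  sleep 2                                   # crude wait; the harness uses waitforlisten
  ./scripts/rpc.py env_dpdk_get_mem_stats   # reply names the file holding the dump
  kill "$tgt" && wait "$tgt"

The file the RPC produces has the same element/memzone shape as the listing above.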
00:05:03.818 ************************************
00:05:03.818 START TEST event
00:05:03.818 ************************************
00:05:03.818 11:48:01 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:03.818 * Looking for test storage...
00:05:03.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:04.076 11:48:01 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:04.076 11:48:01 event -- common/autotest_common.sh@1691 -- # lcov --version
00:05:04.076 11:48:01 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:04.076 11:48:01 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:04.076 11:48:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:04.076 [cmp_versions xtrace condensed: the helper splits each version on '.-:' into arrays (IFS=.-: read -ra ver1/ver2), walks the fields through decimal/echo, and returns 0 here because 1 < 2 in the first field]
00:05:04.077 11:48:01 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:04.077 11:48:01 event -- common/autotest_common.sh@1704 -- # export LCOV_OPTS with the lcov/genhtml/geninfo coverage flags (multi-line quoted value condensed)
00:05:04.077 11:48:01 event -- common/autotest_common.sh@1705 -- # export LCOV='lcov' plus the same coverage flags (multi-line quoted value condensed)
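The trace above compares the detected lcov version against a threshold to pick coverage flags. A simplified bash reimplementation of the comparison idea; the function name and structure here are mine, and the real cmp_versions in scripts/common.sh also splits on '-'/':' and supports other operators:

  # Return 0 (true) when version $1 is strictly older than version $2.
  lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields compare as 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "1.15 is older than 2"   # prints, matching the trace's return 0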
00:05:04.077 11:48:01 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:04.077 11:48:01 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:04.077 11:48:01 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:04.077 11:48:01 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:05:04.077 11:48:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:04.077 11:48:01 event -- common/autotest_common.sh@10 -- # set +x
00:05:04.077 ************************************
00:05:04.077 START TEST event_perf
00:05:04.077 ************************************
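event_perf is the first benchmark in the suite, and the harness invokes it exactly as traced above. For a manual run with the same binary path from the log:

  # -m 0xF: hex core mask, one reactor per set bit (cores 0-3 here)
  # -t 1:   run the event round-trip benchmark for one second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # Output is one 'lcore N: <count>' line per reactor -- the events that core
  # processed during the interval -- followed by 'done.'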
00:05:04.077 11:48:01 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:04.077 Running I/O for 1 seconds...[2024-11-18 11:48:01.590191] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:05:04.077 [2024-11-18 11:48:01.590294] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58193 ]
00:05:04.077 [2024-11-18 11:48:01.747968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:04.336 [2024-11-18 11:48:01.847200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:04.336 [2024-11-18 11:48:01.847538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:04.336 Running I/O for 1 seconds...[2024-11-18 11:48:01.847659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.336 [2024-11-18 11:48:01.847679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:05.708 
00:05:05.708 lcore 0: 202504
00:05:05.708 lcore 1: 202500
00:05:05.708 lcore 2: 202503
00:05:05.708 lcore 3: 202504
00:05:05.708 done.
00:05:05.708 
00:05:05.708 real 0m1.455s
00:05:05.708 user 0m4.255s
00:05:05.708 sys 0m0.082s
00:05:05.708 11:48:03 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:05.708 11:48:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:05.708 ************************************
00:05:05.708 END TEST event_perf
00:05:05.708 ************************************
00:05:05.708 11:48:03 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:05.708 11:48:03 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:05:05.708 11:48:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:05.708 11:48:03 event -- common/autotest_common.sh@10 -- # set +x
00:05:05.708 ************************************
00:05:05.708 START TEST event_reactor
00:05:05.708 ************************************
00:05:05.708 11:48:03 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:05.708 [2024-11-18 11:48:03.087698] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:05:05.708 [2024-11-18 11:48:03.087815] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58227 ]
00:05:05.708 [2024-11-18 11:48:03.249479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.708 [2024-11-18 11:48:03.345593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.079 test_start
00:05:07.079 oneshot
00:05:07.079 tick 100
00:05:07.079 tick 100
00:05:07.079 tick 250
00:05:07.079 tick 100
00:05:07.079 tick 100
00:05:07.079 tick 100
00:05:07.079 tick 250
00:05:07.079 tick 500
00:05:07.079 tick 100
00:05:07.079 tick 100
00:05:07.079 tick 250
00:05:07.079 tick 100
00:05:07.079 tick 100
00:05:07.079 test_end
00:05:07.079 
00:05:07.079 real 0m1.437s
00:05:07.079 user 0m1.267s
00:05:07.079 sys 0m0.063s
00:05:07.079 11:48:04 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:07.079 11:48:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:07.079 ************************************
00:05:07.079 END TEST event_reactor
00:05:07.079 ************************************
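The event_reactor run above exercises timer-driven work on a single core (-c 0x1). My reading of the markers, not stated in the log itself: 'oneshot' is a single-fire event, and each 'tick 100/250/500' line is printed as a poller registered at that period fires, with test_end after the -t duration elapses. To rerun just this piece:

  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1   # path and flag verbatim from the trace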
00:05:07.079 11:48:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:07.079 11:48:04 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:05:07.079 11:48:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:07.079 11:48:04 event -- common/autotest_common.sh@10 -- # set +x
00:05:07.079 ************************************
00:05:07.079 START TEST event_reactor_perf
00:05:07.079 ************************************
00:05:07.079 11:48:04 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:07.079 [2024-11-18 11:48:04.567125] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:05:07.079 [2024-11-18 11:48:04.567229] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ]
00:05:07.079 [2024-11-18 11:48:04.725194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.337 [2024-11-18 11:48:04.822792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.270 test_start
00:05:08.270 test_end
00:05:08.270 Performance: 315767 events per second
00:05:08.529 
00:05:08.529 real 0m1.434s
00:05:08.529 user 0m1.267s
00:05:08.529 sys 0m0.060s
00:05:08.529 11:48:05 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:08.529 11:48:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:08.529 ************************************
00:05:08.529 END TEST event_reactor_perf
00:05:08.529 ************************************
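reactor_perf boils down to a single throughput figure. If you only want that number from a manual run, something like this works; the binary path is from the trace and the 'Performance:' line format matches the output above:

  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 \
    | awk '/Performance:/ {print $2}'    # -> events per second, e.g. 315767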
00:05:08.529 11:48:06 event -- event/event.sh@49 -- # uname -s
00:05:08.529 11:48:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:08.529 11:48:06 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:08.529 11:48:06 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:08.529 11:48:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:08.529 11:48:06 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.529 ************************************
00:05:08.529 START TEST event_scheduler
00:05:08.529 ************************************
00:05:08.529 * Looking for test storage...
00:05:08.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:05:08.529 [lcov version-check trace condensed: the same common/autotest_common.sh@1690-1705 / scripts/common.sh cmp_versions sequence shown at the start of the event suite runs again here, with identical results and the same LCOV/LCOV_OPTS exports]
00:05:08.530 11:48:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:08.530 11:48:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58334
00:05:08.530 11:48:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:08.530 11:48:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58334
00:05:08.530 11:48:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:08.530 11:48:06 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58334 ']'
00:05:08.530 11:48:06 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.530 11:48:06 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:08.530 11:48:06 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.530 11:48:06 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:08.530 11:48:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:08.530 [2024-11-18 11:48:06.222324] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:05:08.530 [2024-11-18 11:48:06.222448] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58334 ]
00:05:08.788 [2024-11-18 11:48:06.380832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:08.788 [2024-11-18 11:48:06.484091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.788 [2024-11-18 11:48:06.484355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:08.788 [2024-11-18 11:48:06.484676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:08.788 [2024-11-18 11:48:06.484693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:05:09.351 11:48:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.351 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.351 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.351 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.351 POWER: Cannot set governor of lcore 0 to performance
00:05:09.351 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.351 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.351 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:09.351 POWER: Cannot set governor of lcore 0 to userspace
00:05:09.351 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:09.351 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:09.351 POWER: Unable to set Power Management Environment for lcore 0
00:05:09.351 [2024-11-18 11:48:07.025944] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:05:09.351 [2024-11-18 11:48:07.025970] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:05:09.351 [2024-11-18 11:48:07.025979] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:09.351 [2024-11-18 11:48:07.025994] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:09.351 [2024-11-18 11:48:07.026014] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:09.351 [2024-11-18 11:48:07.026023] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.351 11:48:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.351 11:48:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.609 [2024-11-18 11:48:07.248214] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
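At this point the harness has flipped the app from the default static scheduler to the dynamic one and finished subsystem init, all over JSON-RPC. A hand-run equivalent against the same socket; framework_set_scheduler and framework_start_init appear verbatim in the trace, while framework_get_scheduler is an extra query I would expect SPDK to offer, so verify it before relying on it:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_set_scheduler dynamic   # governor errors like those above are non-fatal
  $rpc framework_start_init              # completes init of an app started with --wait-for-rpc
  $rpc framework_get_scheduler           # assumed RPC: report the scheduler now in force

The POWER/governor failures are expected in this environment: the guest exposes no cpufreq sysfs knobs, so the dynamic scheduler proceeds without the DPDK governor, as the NOTICE lines show.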
00:05:09.609 11:48:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.609 11:48:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:09.609 11:48:07 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:09.609 11:48:07 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:09.609 11:48:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.609 ************************************
00:05:09.609 START TEST scheduler_create_thread
00:05:09.609 ************************************
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:09.609 2
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:09.609 3
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:09.609 4
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:09.609 5
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:09.609 6
00:05:09.609 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:09.866 7
00:05:09.866 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:09.867 8
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:09.867 9
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:09.867 10
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:09.867 [per-call xtrace records around each rpc_cmd condensed: common/autotest_common.sh@561 xtrace_disable, @10 set +x, @589 [[ 0 == 0 ]] repeat between the lines above]
00:05:09.867 
00:05:09.867 real 0m0.109s
00:05:09.867 user 0m0.011s
00:05:09.867 sys 0m0.005s
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:09.867 ************************************
00:05:09.867 END TEST scheduler_create_thread
00:05:09.867 11:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.867 ************************************
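The teardown that follows goes through autotest_common.sh's killprocess helper, whose xtrace is visible below. A stripped-down sketch of the same pattern; the real helper also special-cases sudo-wrapped processes and non-Linux ps flags:

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if it is already gone
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid" && wait "$pid"                # wait works because the pid is our child
  }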
00:05:09.867 11:48:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:09.867 11:48:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58334
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58334 ']'
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58334
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58334
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:05:09.867 killing process with pid 58334
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58334'
00:05:09.867 11:48:07 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58334
00:05:10.432 11:48:07 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58334
00:05:10.432 [2024-11-18 11:48:07.852868] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:10.998 
00:05:10.998 real 0m2.483s
00:05:10.998 user 0m4.129s
00:05:10.998 sys 0m0.343s
00:05:10.998 11:48:08 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:10.998 ************************************
00:05:10.998 END TEST event_scheduler
00:05:10.998 ************************************
00:05:10.998 11:48:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
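Every START/END banner pair and real/user/sys triple in this log comes from the run_test wrapper in autotest_common.sh. A minimal sketch of the pattern; the real wrapper also validates its arguments (the "'[' N -le 1 ']'" checks traced above) and manages xtrace around the timed body:

  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                                # bash builtin prints real/user/sys
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }
  # e.g. run_test event_perf ./event_perf -m 0xF -t 1   (illustrative call)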
00:05:10.998 [2024-11-18 11:48:08.585240] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58412 ] 00:05:11.256 [2024-11-18 11:48:08.745926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.256 [2024-11-18 11:48:08.845205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.256 [2024-11-18 11:48:08.845214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.857 11:48:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.857 11:48:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:11.857 11:48:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.115 Malloc0 00:05:12.115 11:48:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.372 Malloc1 00:05:12.372 11:48:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.372 11:48:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.631 /dev/nbd0 00:05:12.631 11:48:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.631 11:48:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:12.631 11:48:10 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.631 1+0 records in 00:05:12.631 1+0 records out 00:05:12.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016131 s, 25.4 MB/s 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:12.631 11:48:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:12.631 11:48:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.631 11:48:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.631 11:48:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.890 /dev/nbd1 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.890 1+0 records in 00:05:12.890 1+0 records out 00:05:12.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178501 s, 22.9 MB/s 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:12.890 11:48:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.890 
11:48:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.890 { 00:05:12.890 "nbd_device": "/dev/nbd0", 00:05:12.890 "bdev_name": "Malloc0" 00:05:12.890 }, 00:05:12.890 { 00:05:12.890 "nbd_device": "/dev/nbd1", 00:05:12.890 "bdev_name": "Malloc1" 00:05:12.890 } 00:05:12.890 ]' 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.890 { 00:05:12.890 "nbd_device": "/dev/nbd0", 00:05:12.890 "bdev_name": "Malloc0" 00:05:12.890 }, 00:05:12.890 { 00:05:12.890 "nbd_device": "/dev/nbd1", 00:05:12.890 "bdev_name": "Malloc1" 00:05:12.890 } 00:05:12.890 ]' 00:05:12.890 11:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.149 /dev/nbd1' 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.149 /dev/nbd1' 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.149 256+0 records in 00:05:13.149 256+0 records out 00:05:13.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00745495 s, 141 MB/s 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.149 256+0 records in 00:05:13.149 256+0 records out 00:05:13.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188695 s, 55.6 MB/s 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.149 11:48:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.149 256+0 records in 00:05:13.149 256+0 records out 00:05:13.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192835 s, 54.4 MB/s 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.150 11:48:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.150 11:48:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.409 11:48:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.409 11:48:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.409 11:48:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.668 11:48:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.668 11:48:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.235 11:48:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.803 [2024-11-18 11:48:12.253686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.803 [2024-11-18 11:48:12.326377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.803 [2024-11-18 11:48:12.326489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.803 [2024-11-18 11:48:12.423830] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.803 [2024-11-18 11:48:12.423876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.343 11:48:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.343 spdk_app_start Round 1 00:05:17.343 11:48:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.343 11:48:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58412 ']' 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
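Each nbd_start_disk in Round 0 above is followed by waitfornbd, whose polling produces the repeated '(( i <= 20 ))', grep, dd, stat, and rm lines in the trace. A minimal sketch of that helper from common/autotest_common.sh, reconstructed from the traced line numbers (@870 through @891); the retced path is the one the trace shows at test/event/nbdtest, and the retry sleep plus the failure return are assumptions, since this run only ever shows the first attempt succeeding:

    waitfornbd() {
        local nbd_name=$1                                  # @870
        local i                                            # @871
        local tmpfile=$SPDK_TEST_STORAGE/nbdtest           # trace: .../test/event/nbdtest

        # Wait until the kernel publishes the device (@873-@875).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # @874-@875
            sleep 0.1                                      # assumption: first try succeeds here
        done

        # Prove the device is readable through the block layer (@886-@891).
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$tmpfile" bs=4096 count=1 iflag=direct  # @887
            size=$(stat -c %s "$tmpfile")                  # @888
            rm -f "$tmpfile"                               # @889
            if [ "$size" != "0" ]; then                    # @890
                return 0                                   # @891
            fi
            sleep 0.1                                      # assumption
        done
        return 1                                           # assumption: failure path not traced
    }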
00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.343 11:48:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:17.343 11:48:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.602 Malloc0 00:05:17.602 11:48:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.860 Malloc1 00:05:17.860 11:48:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.860 /dev/nbd0 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.860 11:48:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.860 1+0 records in 00:05:17.860 1+0 records out 
00:05:17.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027791 s, 14.7 MB/s 00:05:17.860 11:48:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:18.119 11:48:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.119 11:48:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.119 11:48:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.119 /dev/nbd1 00:05:18.119 11:48:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.119 11:48:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:18.119 11:48:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.120 1+0 records in 00:05:18.120 1+0 records out 00:05:18.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209296 s, 19.6 MB/s 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:18.120 11:48:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:18.120 11:48:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.120 11:48:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.120 11:48:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.120 11:48:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.120 11:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.379 11:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.379 { 00:05:18.379 "nbd_device": "/dev/nbd0", 00:05:18.379 "bdev_name": "Malloc0" 00:05:18.379 }, 00:05:18.379 { 00:05:18.379 "nbd_device": "/dev/nbd1", 00:05:18.379 "bdev_name": "Malloc1" 00:05:18.379 } 
00:05:18.379 ]' 00:05:18.379 11:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.379 { 00:05:18.379 "nbd_device": "/dev/nbd0", 00:05:18.379 "bdev_name": "Malloc0" 00:05:18.379 }, 00:05:18.379 { 00:05:18.379 "nbd_device": "/dev/nbd1", 00:05:18.379 "bdev_name": "Malloc1" 00:05:18.379 } 00:05:18.379 ]' 00:05:18.379 11:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.379 /dev/nbd1' 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.379 /dev/nbd1' 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.379 256+0 records in 00:05:18.379 256+0 records out 00:05:18.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00880902 s, 119 MB/s 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.379 256+0 records in 00:05:18.379 256+0 records out 00:05:18.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151791 s, 69.1 MB/s 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.379 11:48:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.638 256+0 records in 00:05:18.638 256+0 records out 00:05:18.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173774 s, 60.3 MB/s 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.638 11:48:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.638 11:48:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.897 11:48:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.158 11:48:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.158 11:48:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.416 11:48:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.983 [2024-11-18 11:48:17.537785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.983 [2024-11-18 11:48:17.616340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.983 [2024-11-18 11:48:17.616356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.243 [2024-11-18 11:48:17.716898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.243 [2024-11-18 11:48:17.716949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.774 11:48:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.774 spdk_app_start Round 2 00:05:22.774 11:48:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.774 11:48:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:22.774 11:48:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58412 ']' 00:05:22.774 11:48:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.774 11:48:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.774 11:48:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
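The write/verify cycle traced in each round (bdev/nbd_common.sh@70 through @85, driven by @100 and @101) is nbd_dd_data_verify: on 'write' it seeds 1 MiB of random data and copies it to every nbd device with O_DIRECT, and on 'verify' it compares each device back against the reference file. The elif is supported by the trace, where the verify pass evaluates both the @74 and @80 tests while the write pass only evaluates @74. A sketch under those observations; names match the trace, but the exact argument handling is inferred:

    nbd_dd_data_verify() {
        local nbd_list=($1)                                 # @70
        local operation=$2                                  # @71
        local tmp_file=$SPDK_TEST_STORAGE/nbdrandtest       # @72

        if [ "$operation" = "write" ]; then                 # @74
            # 256 x 4 KiB = the 1 MiB of random reference data seen in the dd output (@76).
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do                   # @77
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct  # @78
            done
        elif [ "$operation" = "verify" ]; then              # @80
            for i in "${nbd_list[@]}"; do                   # @82
                cmp -b -n 1M "$tmp_file" "$i"               # @83
            done
            rm "$tmp_file"                                  # @85
        fi
    }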
00:05:22.774 11:48:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.774 11:48:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.774 11:48:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.774 11:48:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:22.774 11:48:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.774 Malloc0 00:05:22.774 11:48:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.033 Malloc1 00:05:23.033 11:48:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.033 11:48:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.291 /dev/nbd0 00:05:23.291 11:48:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.291 11:48:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.291 1+0 records in 00:05:23.291 1+0 records out 
00:05:23.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181137 s, 22.6 MB/s 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.291 11:48:20 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:23.291 11:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.291 11:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.291 11:48:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.550 /dev/nbd1 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.550 1+0 records in 00:05:23.550 1+0 records out 00:05:23.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434878 s, 9.4 MB/s 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.550 11:48:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.550 11:48:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.807 { 00:05:23.807 "nbd_device": "/dev/nbd0", 00:05:23.807 "bdev_name": "Malloc0" 00:05:23.807 }, 00:05:23.807 { 00:05:23.807 "nbd_device": "/dev/nbd1", 00:05:23.807 "bdev_name": "Malloc1" 00:05:23.807 } 
00:05:23.807 ]' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.807 { 00:05:23.807 "nbd_device": "/dev/nbd0", 00:05:23.807 "bdev_name": "Malloc0" 00:05:23.807 }, 00:05:23.807 { 00:05:23.807 "nbd_device": "/dev/nbd1", 00:05:23.807 "bdev_name": "Malloc1" 00:05:23.807 } 00:05:23.807 ]' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.807 /dev/nbd1' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.807 /dev/nbd1' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.807 11:48:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.807 256+0 records in 00:05:23.807 256+0 records out 00:05:23.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00745274 s, 141 MB/s 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.808 256+0 records in 00:05:23.808 256+0 records out 00:05:23.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178435 s, 58.8 MB/s 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.808 256+0 records in 00:05:23.808 256+0 records out 00:05:23.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229519 s, 45.7 MB/s 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.808 11:48:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.808 11:48:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.065 11:48:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.323 11:48:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.581 11:48:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.581 11:48:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.839 11:48:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.405 [2024-11-18 11:48:22.911504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.405 [2024-11-18 11:48:22.983119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.405 [2024-11-18 11:48:22.983307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.405 [2024-11-18 11:48:23.079333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.405 [2024-11-18 11:48:23.079395] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.930 11:48:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58412 ']' 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
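Both after starting the disks and after stopping them, the harness counts attached devices with nbd_get_count (bdev/nbd_common.sh@61 through @66): it asks the target for its disk list over RPC, extracts the nbd_device fields with jq, and counts the /dev/nbd matches. The bare 'true' at @65 in the zero-disk passes above is the guard for grep -c exiting non-zero when nothing matches. A sketch reconstructed from those traced lines:

    nbd_get_count() {
        local rpc_server=$1                                              # @61
        local nbd_disks_json nbd_disks_name count

        nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)  # @63
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')        # @64
        # grep -c prints 0 but exits 1 on zero matches, hence the '|| true' (@65).
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"                                                    # @66
    }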
00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:27.930 11:48:25 event.app_repeat -- event/event.sh@39 -- # killprocess 58412 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58412 ']' 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58412 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58412 00:05:27.930 killing process with pid 58412 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58412' 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58412 00:05:27.930 11:48:25 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58412 00:05:28.495 spdk_app_start is called in Round 0. 00:05:28.495 Shutdown signal received, stop current app iteration 00:05:28.495 Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 reinitialization... 00:05:28.495 spdk_app_start is called in Round 1. 00:05:28.495 Shutdown signal received, stop current app iteration 00:05:28.495 Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 reinitialization... 00:05:28.495 spdk_app_start is called in Round 2. 00:05:28.495 Shutdown signal received, stop current app iteration 00:05:28.495 Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 reinitialization... 00:05:28.495 spdk_app_start is called in Round 3. 00:05:28.495 Shutdown signal received, stop current app iteration 00:05:28.495 ************************************ 00:05:28.495 END TEST app_repeat 00:05:28.495 ************************************ 00:05:28.495 11:48:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.495 11:48:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.495 00:05:28.495 real 0m17.561s 00:05:28.495 user 0m38.495s 00:05:28.495 sys 0m1.988s 00:05:28.495 11:48:26 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.495 11:48:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.495 11:48:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.495 11:48:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.495 11:48:26 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.495 11:48:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.495 11:48:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.495 ************************************ 00:05:28.495 START TEST cpu_locks 00:05:28.495 ************************************ 00:05:28.495 11:48:26 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.753 * Looking for test storage... 
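killprocess (common/autotest_common.sh@952 through @976) is the teardown used for the scheduler target earlier, for app_repeat just above, and again at the end of each cpu_locks test below; the traced steps map directly onto a helper along these lines. The sudo branch guarded at @962 is never taken in this run (process_name is always a reactor), so its handling is left as a placeholder comment rather than invented:

    killprocess() {
        local pid=$1
        if [ -z "$pid" ]; then return 1; fi                # @952 (empty-pid behaviour assumed)

        if kill -0 "$pid"; then                            # @956: is the pid still alive?
            if [ "$(uname)" = Linux ]; then                # @957
                process_name=$(ps --no-headers -o comm= "$pid")  # @958
            fi
            # @962 guards against signalling a sudo wrapper; not exercised here.
            echo "killing process with pid $pid"           # @970
            kill "$pid"                                    # @971
            wait "$pid"                                    # @976
        fi
    }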
00:05:28.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.753 11:48:26 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.753 11:48:26 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.753 11:48:26 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.753 11:48:26 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.753 11:48:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.753 11:48:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.753 11:48:26 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.754 --rc genhtml_branch_coverage=1 00:05:28.754 --rc genhtml_function_coverage=1 00:05:28.754 --rc genhtml_legend=1 00:05:28.754 --rc geninfo_all_blocks=1 00:05:28.754 --rc geninfo_unexecuted_blocks=1 00:05:28.754 00:05:28.754 ' 00:05:28.754 11:48:26 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.754 --rc genhtml_branch_coverage=1 00:05:28.754 --rc genhtml_function_coverage=1 
00:05:28.754 --rc genhtml_legend=1 00:05:28.754 --rc geninfo_all_blocks=1 00:05:28.754 --rc geninfo_unexecuted_blocks=1 00:05:28.754 00:05:28.754 ' 00:05:28.754 11:48:26 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.754 --rc genhtml_branch_coverage=1 00:05:28.754 --rc genhtml_function_coverage=1 00:05:28.754 --rc genhtml_legend=1 00:05:28.754 --rc geninfo_all_blocks=1 00:05:28.754 --rc geninfo_unexecuted_blocks=1 00:05:28.754 00:05:28.754 ' 00:05:28.754 11:48:26 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.754 --rc genhtml_branch_coverage=1 00:05:28.754 --rc genhtml_function_coverage=1 00:05:28.754 --rc genhtml_legend=1 00:05:28.754 --rc geninfo_all_blocks=1 00:05:28.754 --rc geninfo_unexecuted_blocks=1 00:05:28.754 00:05:28.754 ' 00:05:28.754 11:48:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.754 11:48:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.754 11:48:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.754 11:48:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.754 11:48:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.754 11:48:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.754 11:48:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.754 ************************************ 00:05:28.754 START TEST default_locks 00:05:28.754 ************************************ 00:05:28.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58843 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58843 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58843 ']' 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.754 11:48:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.754 [2024-11-18 11:48:26.376437] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
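
The trace above is scripts/common.sh deciding whether the installed lcov predates version 2 before it enables the extra branch-coverage flags: both version strings are split on the separators in IFS=.-: and compared field by field. A minimal standalone sketch of that comparison in bash, with illustrative names rather than the exact helpers from scripts/common.sh:

  #!/usr/bin/env bash
  # Return 0 (true) if version $1 sorts strictly below version $2.
  # Splits on the separators the trace shows (IFS=.-:) and compares
  # numeric fields left to right, padding missing fields with 0.
  version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      local x=${a[i]:-0} y=${b[i]:-0}
      (( x < y )) && return 0
      (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
  }

  version_lt 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1"

Here 1.15 < 2, so the run above ends up exporting the --rc lcov_* and genhtml options seen in LCOV_OPTS.
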
00:05:28.754 [2024-11-18 11:48:26.376548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58843 ] 00:05:29.012 [2024-11-18 11:48:26.533409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.012 [2024-11-18 11:48:26.611894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.577 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.577 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:29.577 11:48:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58843 00:05:29.577 11:48:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.577 11:48:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58843 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58843 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58843 ']' 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58843 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58843 00:05:29.835 killing process with pid 58843 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58843' 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58843 00:05:29.835 11:48:27 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58843 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58843 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58843 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:31.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58843 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58843 ']' 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.233 ERROR: process (pid: 58843) is no longer running 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58843) - No such process 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.233 ************************************ 00:05:31.233 END TEST default_locks 00:05:31.233 ************************************ 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.233 00:05:31.233 real 0m2.237s 00:05:31.233 user 0m2.226s 00:05:31.233 sys 0m0.410s 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.233 11:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.233 11:48:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.233 11:48:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.233 11:48:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.233 11:48:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.233 ************************************ 00:05:31.233 START TEST default_locks_via_rpc 00:05:31.233 ************************************ 00:05:31.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
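
Two assertions drive the default_locks case that just finished: a live spdk_tgt on core mask 0x1 must hold a file lock whose name contains spdk_cpu_lock (checked with lslocks, as in the locks_exist trace), and once the process is killed, waitforlisten against the dead pid must fail, which the NOT wrapper turns into a passing result. A hedged re-implementation of both helpers, not the originals from cpu_locks.sh / autotest_common.sh:

  # Assert that pid $1 holds an SPDK per-core lock file.
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # Invert an exit status: succeed only when the wrapped command fails.
  NOT() {
    if "$@"; then return 1; else return 0; fi
  }

  ./build/bin/spdk_tgt -m 0x1 & pid=$!
  sleep 1                                  # stand-in for waitforlisten's RPC-socket poll
  locks_exist "$pid" && echo "core 0 lock held by $pid"
  kill "$pid"; wait "$pid"
  NOT locks_exist "$pid" && echo "lock gone after the target exited"

The real waitforlisten polls the UNIX-domain RPC socket (/var/tmp/spdk.sock by default) instead of sleeping.
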
00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58896 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58896 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58896 ']' 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.233 11:48:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.233 [2024-11-18 11:48:28.643575] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:31.233 [2024-11-18 11:48:28.643808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:05:31.233 [2024-11-18 11:48:28.792962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.233 [2024-11-18 11:48:28.871282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 
58896 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.854 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58896 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58896 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58896 ']' 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58896 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58896 00:05:32.111 killing process with pid 58896 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58896' 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58896 00:05:32.111 11:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58896 00:05:33.483 ************************************ 00:05:33.483 END TEST default_locks_via_rpc 00:05:33.483 ************************************ 00:05:33.483 00:05:33.483 real 0m2.283s 00:05:33.483 user 0m2.258s 00:05:33.483 sys 0m0.433s 00:05:33.483 11:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.483 11:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.484 11:48:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.484 11:48:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.484 11:48:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.484 11:48:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.484 ************************************ 00:05:33.484 START TEST non_locking_app_on_locked_coremask 00:05:33.484 ************************************ 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58948 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58948 /var/tmp/spdk.sock 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58948 ']' 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:05:33.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.484 11:48:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.484 [2024-11-18 11:48:30.974306] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:33.484 [2024-11-18 11:48:30.974421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:05:33.484 [2024-11-18 11:48:31.121472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.742 [2024-11-18 11:48:31.202474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58964 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58964 /var/tmp/spdk2.sock 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58964 ']' 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.307 11:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.307 [2024-11-18 11:48:31.843046] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:34.307 [2024-11-18 11:48:31.843728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:05:34.565 [2024-11-18 11:48:32.007557] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
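
For default_locks_via_rpc, which ran just before this block, the same lock is toggled from the RPC side instead of at startup: framework_disable_cpumask_locks releases the lock files while the target keeps running, and framework_enable_cpumask_locks re-claims them, which is what the rpc_cmd calls in that trace do. Roughly, from an SPDK checkout (paths illustrative):

  ./build/bin/spdk_tgt -m 0x1 & pid=$!

  ./scripts/rpc.py framework_disable_cpumask_locks    # releases /var/tmp/spdk_cpu_lock_000
  lslocks -p "$pid" | grep spdk_cpu_lock && echo "unexpected: lock still held"

  ./scripts/rpc.py framework_enable_cpumask_locks     # re-acquires the lock
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-claimed"
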
00:05:34.565 [2024-11-18 11:48:32.007598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.565 [2024-11-18 11:48:32.175357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.499 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.499 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:35.499 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58948 00:05:35.499 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58948 00:05:35.499 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58948 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58948 ']' 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58948 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58948 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.758 killing process with pid 58948 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58948' 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58948 00:05:35.758 11:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58948 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58964 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58964 ']' 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58964 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58964 00:05:38.286 killing process with pid 58964 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58964' 00:05:38.286 11:48:35 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58964 00:05:38.286 11:48:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58964 00:05:39.661 ************************************ 00:05:39.661 END TEST non_locking_app_on_locked_coremask 00:05:39.661 ************************************ 00:05:39.661 00:05:39.661 real 0m6.083s 00:05:39.661 user 0m6.286s 00:05:39.661 sys 0m0.819s 00:05:39.661 11:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.661 11:48:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.661 11:48:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:39.661 11:48:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.661 11:48:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.661 11:48:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.661 ************************************ 00:05:39.661 START TEST locking_app_on_unlocked_coremask 00:05:39.661 ************************************ 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59055 00:05:39.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59055 /var/tmp/spdk.sock 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59055 ']' 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.661 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.661 [2024-11-18 11:48:37.101551] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:39.661 [2024-11-18 11:48:37.101673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:05:39.661 [2024-11-18 11:48:37.260107] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
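
The non_locking_app_on_locked_coremask run above shows that the lock is only enforced between lock-enabled instances: the first target claims core 0, and the second still starts on the same mask because it is launched with --disable-cpumask-locks (note the "CPU core locks deactivated" notice) and its own RPC socket. Schematically, with the flags as they appear in the trace:

  ./build/bin/spdk_tgt -m 0x1 &                                  # claims core 0
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                                   # shares core 0, takes no lock
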
00:05:39.661 [2024-11-18 11:48:37.260155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.919 [2024-11-18 11:48:37.362484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59071 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59071 /var/tmp/spdk2.sock 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59071 ']' 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.486 11:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.486 [2024-11-18 11:48:38.021358] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:05:40.486 [2024-11-18 11:48:38.021665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:05:40.743 [2024-11-18 11:48:38.195012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.743 [2024-11-18 11:48:38.393600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.114 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.114 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:42.114 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59071 00:05:42.114 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59071 00:05:42.114 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59055 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59055 ']' 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59055 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59055 00:05:42.372 killing process with pid 59055 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59055' 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59055 00:05:42.372 11:48:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59055 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59071 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59071 ']' 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59071 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59071 00:05:44.901 killing process with pid 59071 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.901 11:48:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59071' 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59071 00:05:44.901 11:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59071 00:05:45.837 00:05:45.837 real 0m6.400s 00:05:45.837 user 0m6.660s 00:05:45.837 sys 0m0.812s 00:05:45.837 ************************************ 00:05:45.837 END TEST locking_app_on_unlocked_coremask 00:05:45.837 ************************************ 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.837 11:48:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:45.837 11:48:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.837 11:48:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.837 11:48:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.837 ************************************ 00:05:45.837 START TEST locking_app_on_locked_coremask 00:05:45.837 ************************************ 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59173 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59173 /var/tmp/spdk.sock 00:05:45.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59173 ']' 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.837 11:48:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.095 [2024-11-18 11:48:43.535932] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
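
locking_app_on_unlocked_coremask, which ended just above, inverts that arrangement: the first instance runs with --disable-cpumask-locks, so the second, lock-enabled instance is the one that ends up owning the core 0 lock, and the lslocks check is therefore run against the second pid (59071 above). A sketch of the setup:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &           # takes no lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!    # this one locks core 0
  sleep 1
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second instance holds the lock"
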
00:05:46.095 [2024-11-18 11:48:43.536038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:05:46.095 [2024-11-18 11:48:43.690028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.096 [2024-11-18 11:48:43.766462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59184 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59184 /var/tmp/spdk2.sock 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59184 /var/tmp/spdk2.sock 00:05:47.030 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59184 /var/tmp/spdk2.sock 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59184 ']' 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.031 11:48:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.031 [2024-11-18 11:48:44.437790] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:05:47.031 [2024-11-18 11:48:44.438182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59184 ] 00:05:47.031 [2024-11-18 11:48:44.600464] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59173 has claimed it. 00:05:47.031 [2024-11-18 11:48:44.600509] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.594 ERROR: process (pid: 59184) is no longer running 00:05:47.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59184) - No such process 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59173 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59173 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59173 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59173 ']' 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59173 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59173 00:05:47.594 killing process with pid 59173 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59173' 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59173 00:05:47.594 11:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59173 00:05:48.965 00:05:48.965 real 0m2.963s 00:05:48.965 user 0m3.207s 00:05:48.965 sys 0m0.498s 00:05:48.965 ************************************ 00:05:48.965 END TEST locking_app_on_locked_coremask 00:05:48.965 ************************************ 00:05:48.965 11:48:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.965 11:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.965 11:48:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:48.965 11:48:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.965 11:48:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.965 11:48:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.965 ************************************ 00:05:48.965 START TEST locking_overlapped_coremask 00:05:48.965 ************************************ 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:48.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59237 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59237 /var/tmp/spdk.sock 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59237 ']' 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.965 11:48:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.965 [2024-11-18 11:48:46.534701] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
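
The ERROR pair above ("Cannot create lock on core 0, probably process 59173 has claimed it" and "Unable to acquire lock on assigned core mask - exiting") is the enforcement path: when both instances keep locks enabled, the second refuses to start at all, and the test passes precisely because NOT waitforlisten expects that startup to fail. The observable behavior, roughly:

  ./build/bin/spdk_tgt -m 0x1 & pid1=$!
  sleep 1
  # A second lock-enabled instance on the same core must exit non-zero.
  if ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance rejected; core 0 stays owned by $pid1"
  fi
  kill "$pid1"
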
00:05:48.965 [2024-11-18 11:48:46.534819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:05:49.224 [2024-11-18 11:48:46.689778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.224 [2024-11-18 11:48:46.768650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.224 [2024-11-18 11:48:46.768783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.224 [2024-11-18 11:48:46.768890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59255 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59255 /var/tmp/spdk2.sock 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59255 /var/tmp/spdk2.sock 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.789 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59255 /var/tmp/spdk2.sock 00:05:49.790 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59255 ']' 00:05:49.790 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.790 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:49.790 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.790 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:49.790 11:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.790 [2024-11-18 11:48:47.434881] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:05:49.790 [2024-11-18 11:48:47.435409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:05:50.047 [2024-11-18 11:48:47.613241] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59237 has claimed it. 00:05:50.047 [2024-11-18 11:48:47.613294] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.615 ERROR: process (pid: 59255) is no longer running 00:05:50.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59255) - No such process 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59237 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59237 ']' 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59237 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59237 00:05:50.615 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:50.616 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:50.616 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59237' 00:05:50.616 killing process with pid 59237 00:05:50.616 11:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59237 00:05:50.616 11:48:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59237 00:05:51.988 00:05:51.988 real 0m2.816s 00:05:51.988 user 0m7.702s 00:05:51.988 sys 0m0.414s 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.988 ************************************ 00:05:51.988 END TEST locking_overlapped_coremask 00:05:51.988 ************************************ 00:05:51.988 11:48:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.988 11:48:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.988 11:48:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.988 11:48:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.988 ************************************ 00:05:51.988 START TEST locking_overlapped_coremask_via_rpc 00:05:51.988 ************************************ 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59303 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59303 /var/tmp/spdk.sock 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59303 ']' 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.988 11:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.988 [2024-11-18 11:48:49.394096] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:51.988 [2024-11-18 11:48:49.394402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59303 ] 00:05:51.988 [2024-11-18 11:48:49.583464] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
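
locking_overlapped_coremask also exposes the on-disk layout: a target started with -m 0x7 owns one lock file per core, /var/tmp/spdk_cpu_lock_000 through _002, and the check_remaining_locks trace simply compares the glob against that expected list. The same check, lifted almost verbatim from the trace:

  # Expect exactly one lock file per core in mask 0x7 (cores 0-2).
  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]]
  }
  check_remaining_locks && echo "lock files match cores 0-2"
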
00:05:51.988 [2024-11-18 11:48:49.583637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.246 [2024-11-18 11:48:49.704538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.246 [2024-11-18 11:48:49.704628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.246 [2024-11-18 11:48:49.704608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59320 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59320 /var/tmp/spdk2.sock 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59320 ']' 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.812 11:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.812 [2024-11-18 11:48:50.328828] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:52.812 [2024-11-18 11:48:50.329191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59320 ] 00:05:52.812 [2024-11-18 11:48:50.510055] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
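
The two masks in this run are what set up the conflict: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so core 2 is the single overlap that the upcoming error names. A one-line check:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2 -> core 2
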
00:05:52.812 [2024-11-18 11:48:50.510105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.071 [2024-11-18 11:48:50.723316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.071 [2024-11-18 11:48:50.723377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.071 [2024-11-18 11:48:50.723402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.511 [2024-11-18 11:48:51.797703] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59303 has claimed it. 00:05:54.511 request: 00:05:54.511 { 00:05:54.511 "method": "framework_enable_cpumask_locks", 00:05:54.511 "req_id": 1 00:05:54.511 } 00:05:54.511 Got JSON-RPC error response 00:05:54.511 response: 00:05:54.511 { 00:05:54.511 "code": -32603, 00:05:54.511 "message": "Failed to claim CPU core: 2" 00:05:54.511 } 00:05:54.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
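The sequence above is the heart of the test: both targets start with --disable-cpumask-locks, the first then claims its cores via the framework_enable_cpumask_locks RPC, and the second target's claim fails with -32603 because core 2 belongs to both masks. A reproduction sketch using only calls that appear in this log (pids, sockets, and lock paths taken from the trace; the lock files themselves are verified later in the test):

    # target 1 (default socket) claims cores 0-2, creating /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py framework_enable_cpumask_locks
    # target 2 (core mask 0x1c) now cannot claim the shared core 2:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"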
00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59303 /var/tmp/spdk.sock 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59303 ']' 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.511 11:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59320 /var/tmp/spdk2.sock 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59320 ']' 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
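The es bookkeeping above comes from autotest_common.sh's NOT helper, which turns an expected failure into a test pass. A simplified sketch of the idiom (an assumption about the helper's internals, reconstructed from this trace; the real function also vets the wrapped command with valid_exec_arg):

    NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed by a signal: a real failure, not a negative-test pass
      (( es != 0 ))                    # succeed only if the wrapped command failed
    }
    NOT false && echo "negative test passed"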
00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.511 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.769 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.769 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:54.769 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:54.769 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.769 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.770 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.770 00:05:54.770 real 0m2.912s 00:05:54.770 user 0m1.082s 00:05:54.770 sys 0m0.120s 00:05:54.770 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.770 11:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.770 ************************************ 00:05:54.770 END TEST locking_overlapped_coremask_via_rpc 00:05:54.770 ************************************ 00:05:54.770 11:48:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:54.770 11:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59303 ]] 00:05:54.770 11:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59303 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59303 ']' 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59303 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59303 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59303' 00:05:54.770 killing process with pid 59303 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59303 00:05:54.770 11:48:52 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59303 00:05:56.144 11:48:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59320 ]] 00:05:56.144 11:48:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59320 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59320 ']' 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59320 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:56.144 
11:48:53 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59320 00:05:56.144 killing process with pid 59320 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59320' 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59320 00:05:56.144 11:48:53 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59320 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59303 ]] 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59303 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59303 ']' 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59303 00:05:57.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59303) - No such process 00:05:57.079 Process with pid 59303 is not found 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59303 is not found' 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59320 ]] 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59320 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59320 ']' 00:05:57.079 Process with pid 59320 is not found 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59320 00:05:57.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59320) - No such process 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59320 is not found' 00:05:57.079 11:48:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.079 00:05:57.079 real 0m28.560s 00:05:57.079 user 0m49.484s 00:05:57.079 sys 0m4.306s 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.079 11:48:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 ************************************ 00:05:57.079 END TEST cpu_locks 00:05:57.079 ************************************ 00:05:57.079 ************************************ 00:05:57.079 END TEST event 00:05:57.079 ************************************ 00:05:57.079 00:05:57.079 real 0m53.341s 00:05:57.079 user 1m39.046s 00:05:57.079 sys 0m7.080s 00:05:57.079 11:48:54 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.079 11:48:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.338 11:48:54 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:57.338 11:48:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.338 11:48:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.338 11:48:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.338 ************************************ 00:05:57.338 START TEST thread 00:05:57.338 ************************************ 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:57.338 * Looking for test storage... 
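The killprocess/cleanup calls above rely on the kill -0 idiom: signal 0 delivers nothing and only reports whether the pid still exists, which is why the second round of kills prints "No such process" and falls through to the "is not found" message. A minimal sketch ($pid here is a placeholder):

    if kill -0 "$pid" 2>/dev/null; then
      echo "process $pid is alive"
    else
      echo "Process with pid $pid is not found"
    fi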
00:05:57.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.338 11:48:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.338 11:48:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.338 11:48:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.338 11:48:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.338 11:48:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.338 11:48:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.338 11:48:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.338 11:48:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.338 11:48:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.338 11:48:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.338 11:48:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.338 11:48:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:57.338 11:48:54 thread -- scripts/common.sh@345 -- # : 1 00:05:57.338 11:48:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.338 11:48:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.338 11:48:54 thread -- scripts/common.sh@365 -- # decimal 1 00:05:57.338 11:48:54 thread -- scripts/common.sh@353 -- # local d=1 00:05:57.338 11:48:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.338 11:48:54 thread -- scripts/common.sh@355 -- # echo 1 00:05:57.338 11:48:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.338 11:48:54 thread -- scripts/common.sh@366 -- # decimal 2 00:05:57.338 11:48:54 thread -- scripts/common.sh@353 -- # local d=2 00:05:57.338 11:48:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.338 11:48:54 thread -- scripts/common.sh@355 -- # echo 2 00:05:57.338 11:48:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.338 11:48:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.338 11:48:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.338 11:48:54 thread -- scripts/common.sh@368 -- # return 0 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.338 ' 00:05:57.338 11:48:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.338 11:48:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.338 ************************************ 00:05:57.338 START TEST thread_poller_perf 00:05:57.338 ************************************ 00:05:57.338 11:48:54 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.338 [2024-11-18 11:48:54.954343] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:57.338 [2024-11-18 11:48:54.954567] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59480 ] 00:05:57.597 [2024-11-18 11:48:55.105468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.597 [2024-11-18 11:48:55.203039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.597 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:58.973 [2024-11-18T11:48:56.674Z] ====================================== 00:05:58.973 [2024-11-18T11:48:56.674Z] busy:2611885960 (cyc) 00:05:58.973 [2024-11-18T11:48:56.674Z] total_run_count: 305000 00:05:58.973 [2024-11-18T11:48:56.674Z] tsc_hz: 2600000000 (cyc) 00:05:58.973 [2024-11-18T11:48:56.674Z] ====================================== 00:05:58.973 [2024-11-18T11:48:56.674Z] poller_cost: 8563 (cyc), 3293 (nsec) 00:05:58.973 00:05:58.973 real 0m1.445s 00:05:58.973 user 0m1.264s 00:05:58.973 sys 0m0.074s 00:05:58.973 11:48:56 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.973 ************************************ 00:05:58.973 END TEST thread_poller_perf 00:05:58.973 ************************************ 00:05:58.973 11:48:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.973 11:48:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.973 11:48:56 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:58.973 11:48:56 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.973 11:48:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.973 ************************************ 00:05:58.973 START TEST thread_poller_perf 00:05:58.973 ************************************ 00:05:58.973 11:48:56 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.973 [2024-11-18 11:48:56.439527] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:05:58.973 [2024-11-18 11:48:56.439698] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:05:58.973 [2024-11-18 11:48:56.610688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.232 Running 1000 pollers for 1 seconds with 0 microseconds period. 
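poller_cost in the table above is derived from the other three numbers: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by the TSC rate (2600000000 Hz, i.e. 2.6 cycles per nanosecond) converts to nanoseconds. The 0-period run that follows is computed the same way (659 cyc, 253 nsec):

    echo $(( 2611885960 / 305000 ))              # -> 8563 cyc per poller call
    echo $(( 8563 * 1000000000 / 2600000000 ))   # -> 3293 nsec at tsc_hz=2600000000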
00:05:59.232 [2024-11-18 11:48:56.709623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.166 [2024-11-18T11:48:57.867Z] ====================================== 00:06:00.166 [2024-11-18T11:48:57.867Z] busy:2603546934 (cyc) 00:06:00.166 [2024-11-18T11:48:57.867Z] total_run_count: 3948000 00:06:00.166 [2024-11-18T11:48:57.867Z] tsc_hz: 2600000000 (cyc) 00:06:00.166 [2024-11-18T11:48:57.867Z] ====================================== 00:06:00.166 [2024-11-18T11:48:57.867Z] poller_cost: 659 (cyc), 253 (nsec) 00:06:00.166 00:06:00.166 real 0m1.423s 00:06:00.166 user 0m1.243s 00:06:00.166 sys 0m0.074s 00:06:00.166 ************************************ 00:06:00.166 END TEST thread_poller_perf 00:06:00.166 ************************************ 00:06:00.166 11:48:57 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.166 11:48:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.425 11:48:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:00.425 ************************************ 00:06:00.425 END TEST thread 00:06:00.425 ************************************ 00:06:00.425 00:06:00.425 real 0m3.075s 00:06:00.425 user 0m2.603s 00:06:00.425 sys 0m0.259s 00:06:00.425 11:48:57 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.425 11:48:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.425 11:48:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:00.425 11:48:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:00.425 11:48:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.425 11:48:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.425 11:48:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.425 ************************************ 00:06:00.425 START TEST app_cmdline 00:06:00.425 ************************************ 00:06:00.425 11:48:57 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:00.425 * Looking for test storage... 
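Each run_test re-runs the lcov version probe seen above (and again below): lt 1.15 2 splits both versions on the characters ".-:" and compares them field by field to decide which LCOV_OPTS to export. A simplified sketch of that comparison (the real cmp_versions in scripts/common.sh additionally validates that each field is decimal):

    version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      local v
      for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2: enable branch/function coverage opts"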
00:06:00.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:00.425 11:48:57 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.425 11:48:57 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.425 11:48:57 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.425 11:48:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.425 --rc genhtml_branch_coverage=1 00:06:00.425 --rc genhtml_function_coverage=1 00:06:00.425 --rc genhtml_legend=1 00:06:00.425 --rc geninfo_all_blocks=1 00:06:00.425 --rc geninfo_unexecuted_blocks=1 00:06:00.425 00:06:00.425 ' 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.425 --rc genhtml_branch_coverage=1 00:06:00.425 --rc genhtml_function_coverage=1 00:06:00.425 --rc genhtml_legend=1 00:06:00.425 --rc geninfo_all_blocks=1 00:06:00.425 --rc geninfo_unexecuted_blocks=1 00:06:00.425 
00:06:00.425 ' 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.425 --rc genhtml_branch_coverage=1 00:06:00.425 --rc genhtml_function_coverage=1 00:06:00.425 --rc genhtml_legend=1 00:06:00.425 --rc geninfo_all_blocks=1 00:06:00.425 --rc geninfo_unexecuted_blocks=1 00:06:00.425 00:06:00.425 ' 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.425 --rc genhtml_branch_coverage=1 00:06:00.425 --rc genhtml_function_coverage=1 00:06:00.425 --rc genhtml_legend=1 00:06:00.425 --rc geninfo_all_blocks=1 00:06:00.425 --rc geninfo_unexecuted_blocks=1 00:06:00.425 00:06:00.425 ' 00:06:00.425 11:48:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:00.425 11:48:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59600 00:06:00.425 11:48:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59600 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59600 ']' 00:06:00.425 11:48:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.425 11:48:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.425 [2024-11-18 11:48:58.091868] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
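The spdk_tgt above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the trace below shows an allowed call returning the version object and a disallowed one being rejected. The three calls involved, issued with SPDK's stock rpc.py client as in this log:

    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown below
    scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # not allowlisted: JSON-RPC -32601 "Method not found"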
00:06:00.425 [2024-11-18 11:48:58.092124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:06:00.684 [2024-11-18 11:48:58.242172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.684 [2024-11-18 11:48:58.322460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.250 11:48:58 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.251 11:48:58 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:01.251 11:48:58 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:01.509 { 00:06:01.509 "version": "SPDK v25.01-pre git sha1 403bf887a", 00:06:01.509 "fields": { 00:06:01.509 "major": 25, 00:06:01.509 "minor": 1, 00:06:01.509 "patch": 0, 00:06:01.509 "suffix": "-pre", 00:06:01.509 "commit": "403bf887a" 00:06:01.509 } 00:06:01.509 } 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:01.509 11:48:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:01.509 11:48:59 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.767 request: 00:06:01.767 { 00:06:01.767 "method": "env_dpdk_get_mem_stats", 00:06:01.767 "req_id": 1 00:06:01.767 } 00:06:01.767 Got JSON-RPC error response 00:06:01.767 response: 00:06:01.767 { 00:06:01.767 "code": -32601, 00:06:01.767 "message": "Method not found" 00:06:01.767 } 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.767 11:48:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59600 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59600 ']' 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59600 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59600 00:06:01.767 killing process with pid 59600 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59600' 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@971 -- # kill 59600 00:06:01.767 11:48:59 app_cmdline -- common/autotest_common.sh@976 -- # wait 59600 00:06:03.143 00:06:03.143 real 0m2.615s 00:06:03.143 user 0m2.927s 00:06:03.143 sys 0m0.376s 00:06:03.143 11:49:00 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.143 ************************************ 00:06:03.143 END TEST app_cmdline 00:06:03.143 ************************************ 00:06:03.143 11:49:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.143 11:49:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:03.143 11:49:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.143 11:49:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.143 11:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.143 ************************************ 00:06:03.143 START TEST version 00:06:03.143 ************************************ 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:03.143 * Looking for test storage... 
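version.sh, traced below, scrapes each field out of include/spdk/version.h with grep/cut/tr and then cross-checks the result against the Python package. A minimal equivalent of its get_header_version helper (assuming the field name is passed already uppercased, and that the header's #define lines are tab-separated, hence the bare cut -f2):

    get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    # major=25 minor=1 patch=0 suffix=-pre -> "25.1" -> "25.1rc0", matching
    # python3 -c 'import spdk; print(spdk.__version__)'
    echo "$(get_header_version MAJOR).$(get_header_version MINOR)"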
00:06:03.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.143 11:49:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.143 11:49:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.143 11:49:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.143 11:49:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.143 11:49:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.143 11:49:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.143 11:49:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.143 11:49:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.143 11:49:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.143 11:49:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.143 11:49:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.143 11:49:00 version -- scripts/common.sh@344 -- # case "$op" in 00:06:03.143 11:49:00 version -- scripts/common.sh@345 -- # : 1 00:06:03.143 11:49:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.143 11:49:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.143 11:49:00 version -- scripts/common.sh@365 -- # decimal 1 00:06:03.143 11:49:00 version -- scripts/common.sh@353 -- # local d=1 00:06:03.143 11:49:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.143 11:49:00 version -- scripts/common.sh@355 -- # echo 1 00:06:03.143 11:49:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.143 11:49:00 version -- scripts/common.sh@366 -- # decimal 2 00:06:03.143 11:49:00 version -- scripts/common.sh@353 -- # local d=2 00:06:03.143 11:49:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.143 11:49:00 version -- scripts/common.sh@355 -- # echo 2 00:06:03.143 11:49:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.143 11:49:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.143 11:49:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.143 11:49:00 version -- scripts/common.sh@368 -- # return 0 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.143 --rc genhtml_branch_coverage=1 00:06:03.143 --rc genhtml_function_coverage=1 00:06:03.143 --rc genhtml_legend=1 00:06:03.143 --rc geninfo_all_blocks=1 00:06:03.143 --rc geninfo_unexecuted_blocks=1 00:06:03.143 00:06:03.143 ' 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.143 --rc genhtml_branch_coverage=1 00:06:03.143 --rc genhtml_function_coverage=1 00:06:03.143 --rc genhtml_legend=1 00:06:03.143 --rc geninfo_all_blocks=1 00:06:03.143 --rc geninfo_unexecuted_blocks=1 00:06:03.143 00:06:03.143 ' 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:03.143 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:03.143 --rc genhtml_branch_coverage=1 00:06:03.143 --rc genhtml_function_coverage=1 00:06:03.143 --rc genhtml_legend=1 00:06:03.143 --rc geninfo_all_blocks=1 00:06:03.143 --rc geninfo_unexecuted_blocks=1 00:06:03.143 00:06:03.143 ' 00:06:03.143 11:49:00 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.143 --rc genhtml_branch_coverage=1 00:06:03.143 --rc genhtml_function_coverage=1 00:06:03.143 --rc genhtml_legend=1 00:06:03.143 --rc geninfo_all_blocks=1 00:06:03.143 --rc geninfo_unexecuted_blocks=1 00:06:03.143 00:06:03.143 ' 00:06:03.143 11:49:00 version -- app/version.sh@17 -- # get_header_version major 00:06:03.143 11:49:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # cut -f2 00:06:03.143 11:49:00 version -- app/version.sh@17 -- # major=25 00:06:03.143 11:49:00 version -- app/version.sh@18 -- # get_header_version minor 00:06:03.143 11:49:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # cut -f2 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.143 11:49:00 version -- app/version.sh@18 -- # minor=1 00:06:03.143 11:49:00 version -- app/version.sh@19 -- # get_header_version patch 00:06:03.143 11:49:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # cut -f2 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.143 11:49:00 version -- app/version.sh@19 -- # patch=0 00:06:03.143 11:49:00 version -- app/version.sh@20 -- # get_header_version suffix 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.143 11:49:00 version -- app/version.sh@14 -- # cut -f2 00:06:03.143 11:49:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.143 11:49:00 version -- app/version.sh@20 -- # suffix=-pre 00:06:03.143 11:49:00 version -- app/version.sh@22 -- # version=25.1 00:06:03.143 11:49:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:03.143 11:49:00 version -- app/version.sh@28 -- # version=25.1rc0 00:06:03.143 11:49:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:03.144 11:49:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:03.144 11:49:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:03.144 11:49:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:03.144 00:06:03.144 real 0m0.191s 00:06:03.144 user 0m0.117s 00:06:03.144 sys 0m0.101s 00:06:03.144 ************************************ 00:06:03.144 END TEST version 00:06:03.144 ************************************ 00:06:03.144 11:49:00 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.144 11:49:00 version -- common/autotest_common.sh@10 -- # set +x 00:06:03.144 11:49:00 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:03.144 11:49:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:03.144 11:49:00 -- spdk/autotest.sh@194 -- # uname -s 00:06:03.144 11:49:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:03.144 11:49:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:03.144 11:49:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:03.144 11:49:00 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:03.144 11:49:00 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:03.144 11:49:00 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:03.144 11:49:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.144 11:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.144 ************************************ 00:06:03.144 START TEST blockdev_nvme 00:06:03.144 ************************************ 00:06:03.144 11:49:00 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:03.403 * Looking for test storage... 00:06:03.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.403 11:49:00 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.403 --rc genhtml_branch_coverage=1 00:06:03.403 --rc genhtml_function_coverage=1 00:06:03.403 --rc genhtml_legend=1 00:06:03.403 --rc geninfo_all_blocks=1 00:06:03.403 --rc geninfo_unexecuted_blocks=1 00:06:03.403 00:06:03.403 ' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.403 --rc genhtml_branch_coverage=1 00:06:03.403 --rc genhtml_function_coverage=1 00:06:03.403 --rc genhtml_legend=1 00:06:03.403 --rc geninfo_all_blocks=1 00:06:03.403 --rc geninfo_unexecuted_blocks=1 00:06:03.403 00:06:03.403 ' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.403 --rc genhtml_branch_coverage=1 00:06:03.403 --rc genhtml_function_coverage=1 00:06:03.403 --rc genhtml_legend=1 00:06:03.403 --rc geninfo_all_blocks=1 00:06:03.403 --rc geninfo_unexecuted_blocks=1 00:06:03.403 00:06:03.403 ' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.403 --rc genhtml_branch_coverage=1 00:06:03.403 --rc genhtml_function_coverage=1 00:06:03.403 --rc genhtml_legend=1 00:06:03.403 --rc geninfo_all_blocks=1 00:06:03.403 --rc geninfo_unexecuted_blocks=1 00:06:03.403 00:06:03.403 ' 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.403 11:49:00 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59767 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59767 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59767 ']' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.403 11:49:00 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.403 11:49:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.403 [2024-11-18 11:49:01.014115] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
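setup_nvme_conf, traced below, feeds load_subsystem_config a bdev configuration generated by scripts/gen_nvme.sh: one bdev_nvme_attach_controller call per emulated PCIe controller. The same JSON that appears inline in the trace, reformatted here for readability:

    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }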
00:06:03.403 [2024-11-18 11:49:01.014358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59767 ] 00:06:03.661 [2024-11-18 11:49:01.168284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.661 [2024-11-18 11:49:01.245360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.282 11:49:01 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.282 11:49:01 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:06:04.282 11:49:01 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:04.282 11:49:01 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:04.282 11:49:01 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:04.282 11:49:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:04.282 11:49:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:04.282 11:49:01 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:04.282 11:49:01 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.282 11:49:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.539 11:49:02 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.539 11:49:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.539 11:49:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.798 11:49:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:04.798 11:49:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:04.799 11:49:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "04f917f3-2eee-44b7-a63f-0bc56efb9481"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "04f917f3-2eee-44b7-a63f-0bc56efb9481",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "482829ad-b8bb-489f-8a81-6b848ca28d0a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "482829ad-b8bb-489f-8a81-6b848ca28d0a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2a86b41c-31b5-46a3-97b0-736f526d25d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a86b41c-31b5-46a3-97b0-736f526d25d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d77b88e2-eaa6-41ac-b4ee-71fefe63fc5f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d77b88e2-eaa6-41ac-b4ee-71fefe63fc5f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "bb7a58a2-c359-4154-b4f2-c44062f02b5c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "bb7a58a2-c359-4154-b4f2-c44062f02b5c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f4892e59-b260-499b-8092-33380394f6ba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f4892e59-b260-499b-8092-33380394f6ba",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:04.799 11:49:02 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:04.799 11:49:02 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:04.799 11:49:02 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:04.799 11:49:02 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59767 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59767 ']' 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59767 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:06:04.799 11:49:02 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59767 00:06:04.799 killing process with pid 59767 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59767' 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59767 00:06:04.799 11:49:02 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59767 00:06:06.173 11:49:03 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:06.173 11:49:03 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:06.173 11:49:03 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:06.173 11:49:03 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.173 11:49:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.173 ************************************ 00:06:06.173 START TEST bdev_hello_world 00:06:06.173 ************************************ 00:06:06.173 11:49:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:06.173 [2024-11-18 11:49:03.521903] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:06.173 [2024-11-18 11:49:03.522143] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59845 ] 00:06:06.173 [2024-11-18 11:49:03.678093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.173 [2024-11-18 11:49:03.757485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.739 [2024-11-18 11:49:04.253657] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:06.739 [2024-11-18 11:49:04.253698] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:06.739 [2024-11-18 11:49:04.253719] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:06.739 [2024-11-18 11:49:04.256185] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:06.739 [2024-11-18 11:49:04.256637] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:06.739 [2024-11-18 11:49:04.256763] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:06.739 [2024-11-18 11:49:04.256998] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
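The hello_world step above can be reproduced outside the harness. A minimal sketch, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk as in this job, the same QEMU NVMe controllers, and a gen_nvme.sh that supports --json-with-subsystems (the hello_bdev path and -b flag are taken verbatim from the log; the /tmp path is illustrative):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Emit a JSON config with one bdev_nvme_attach_controller entry per local NVMe device
    $SPDK/scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json
    # Open bdev Nvme0n1, write "Hello World!", read it back, then stop the app
    $SPDK/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1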
00:06:06.739 00:06:06.739 [2024-11-18 11:49:04.257023] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:07.306 00:06:07.306 real 0m1.500s 00:06:07.306 user 0m1.217s 00:06:07.306 sys 0m0.176s 00:06:07.306 ************************************ 00:06:07.306 END TEST bdev_hello_world 00:06:07.306 ************************************ 00:06:07.306 11:49:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.306 11:49:04 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:07.306 11:49:04 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:07.306 11:49:04 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:07.306 11:49:04 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.306 11:49:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.306 ************************************ 00:06:07.306 START TEST bdev_bounds 00:06:07.306 ************************************ 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59882 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.306 Process bdevio pid: 59882 00:06:07.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59882' 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59882 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59882 ']' 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:07.306 11:49:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:07.564 [2024-11-18 11:49:05.048762] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
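The bounds test that follows drives the bdevio app over RPC rather than running tests at startup: -w makes bdevio wait after attaching the bdevs, and tests.py then triggers the CUnit suites. A condensed sketch of the same flow, with both command lines taken from the log (running bdevio in the background here is illustrative):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Attach all bdevs from the config and wait for an RPC start signal
    $SPDK/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK/test/bdev/bdev.json &
    # Kick off every registered suite (one per bdev) and report pass/fail
    $SPDK/test/bdev/bdevio/tests.py perform_tests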
00:06:07.564 [2024-11-18 11:49:05.048963] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59882 ] 00:06:07.564 [2024-11-18 11:49:05.198750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.821 [2024-11-18 11:49:05.296043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.821 [2024-11-18 11:49:05.296266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.821 [2024-11-18 11:49:05.296300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.387 11:49:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.388 11:49:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:08.388 11:49:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:08.388 I/O targets: 00:06:08.388 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:08.388 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:08.388 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:08.388 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:08.388 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:08.388 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:08.388 00:06:08.388 00:06:08.388 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.388 http://cunit.sourceforge.net/ 00:06:08.388 00:06:08.388 00:06:08.388 Suite: bdevio tests on: Nvme3n1 00:06:08.388 Test: blockdev write read block ...passed 00:06:08.388 Test: blockdev write zeroes read block ...passed 00:06:08.388 Test: blockdev write zeroes read no split ...passed 00:06:08.388 Test: blockdev write zeroes read split ...passed 00:06:08.388 Test: blockdev write zeroes read split partial ...passed 00:06:08.388 Test: blockdev reset ...[2024-11-18 11:49:06.044543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:08.388 [2024-11-18 11:49:06.048055] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:06:08.388 Test: blockdev write read 8 blocks ...
00:06:08.388 passed 00:06:08.388 Test: blockdev write read size > 128k ...passed 00:06:08.388 Test: blockdev write read invalid size ...passed 00:06:08.388 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.388 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.388 Test: blockdev write read max offset ...passed 00:06:08.388 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.388 Test: blockdev writev readv 8 blocks ...passed 00:06:08.388 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.388 Test: blockdev writev readv block ...passed 00:06:08.388 Test: blockdev writev readv size > 128k ...passed 00:06:08.388 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.388 Test: blockdev comparev and writev ...[2024-11-18 11:49:06.056017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b980a000 len:0x1000 00:06:08.388 [2024-11-18 11:49:06.056172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.388 passed 00:06:08.388 Test: blockdev nvme passthru rw ...passed 00:06:08.388 Test: blockdev nvme passthru vendor specific ...[2024-11-18 11:49:06.057373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.388 [2024-11-18 11:49:06.057398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.388 passed 00:06:08.388 Test: blockdev nvme admin passthru ...passed 00:06:08.388 Test: blockdev copy ...passed 00:06:08.388 Suite: bdevio tests on: Nvme2n3 00:06:08.388 Test: blockdev write read block ...passed 00:06:08.388 Test: blockdev write zeroes read block ...passed 00:06:08.388 Test: blockdev write zeroes read no split ...passed 00:06:08.646 Test: blockdev write zeroes read split ...passed 00:06:08.646 Test: blockdev write zeroes read split partial ...passed 00:06:08.646 Test: blockdev reset ...[2024-11-18 11:49:06.114617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:08.646 passed 00:06:08.646 Test: blockdev write read 8 blocks ...[2024-11-18 11:49:06.117602] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:08.646 passed 00:06:08.646 Test: blockdev write read size > 128k ...passed 00:06:08.646 Test: blockdev write read invalid size ...passed 00:06:08.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.646 Test: blockdev write read max offset ...passed 00:06:08.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.646 Test: blockdev writev readv 8 blocks ...passed 00:06:08.646 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.646 Test: blockdev writev readv block ...passed 00:06:08.646 Test: blockdev writev readv size > 128k ...passed 00:06:08.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.646 Test: blockdev comparev and writev ...[2024-11-18 11:49:06.123135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d206000 len:0x1000 00:06:08.646 [2024-11-18 11:49:06.123250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.646 passed 00:06:08.646 Test: blockdev nvme passthru rw ...passed 00:06:08.646 Test: blockdev nvme passthru vendor specific ...passed 00:06:08.646 Test: blockdev nvme admin passthru ...[2024-11-18 11:49:06.123628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.646 [2024-11-18 11:49:06.123652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.646 passed 00:06:08.646 Test: blockdev copy ...passed 00:06:08.646 Suite: bdevio tests on: Nvme2n2 00:06:08.646 Test: blockdev write read block ...passed 00:06:08.646 Test: blockdev write zeroes read block ...passed 00:06:08.646 Test: blockdev write zeroes read no split ...passed 00:06:08.646 Test: blockdev write zeroes read split ...passed 00:06:08.646 Test: blockdev write zeroes read split partial ...passed 00:06:08.646 Test: blockdev reset ...[2024-11-18 11:49:06.168234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:08.646 [2024-11-18 11:49:06.173039] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:08.646 passed 00:06:08.646 Test: blockdev write read 8 blocks ...passed 00:06:08.646 Test: blockdev write read size > 128k ...passed 00:06:08.646 Test: blockdev write read invalid size ...passed 00:06:08.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.646 Test: blockdev write read max offset ...passed 00:06:08.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.646 Test: blockdev writev readv 8 blocks ...passed 00:06:08.646 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.646 Test: blockdev writev readv block ...passed 00:06:08.646 Test: blockdev writev readv size > 128k ...passed 00:06:08.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.646 Test: blockdev comparev and writev ...[2024-11-18 11:49:06.182059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f103c000 len:0x1000 00:06:08.646 [2024-11-18 11:49:06.182179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.646 passed 00:06:08.646 Test: blockdev nvme passthru rw ...passed 00:06:08.646 Test: blockdev nvme passthru vendor specific ...passed 00:06:08.646 Test: blockdev nvme admin passthru ...[2024-11-18 11:49:06.183023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.646 [2024-11-18 11:49:06.183048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.646 passed 00:06:08.646 Test: blockdev copy ...passed 00:06:08.646 Suite: bdevio tests on: Nvme2n1 00:06:08.646 Test: blockdev write read block ...passed 00:06:08.646 Test: blockdev write zeroes read block ...passed 00:06:08.646 Test: blockdev write zeroes read no split ...passed 00:06:08.646 Test: blockdev write zeroes read split ...passed 00:06:08.646 Test: blockdev write zeroes read split partial ...passed 00:06:08.646 Test: blockdev reset ...[2024-11-18 11:49:06.231809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:08.646 [2024-11-18 11:49:06.236622] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
00:06:08.646 00:06:08.646 Test: blockdev write read 8 blocks ...passed 00:06:08.646 Test: blockdev write read size > 128k ...passed 00:06:08.646 Test: blockdev write read invalid size ...passed 00:06:08.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.646 Test: blockdev write read max offset ...passed 00:06:08.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.646 Test: blockdev writev readv 8 blocks ...passed 00:06:08.646 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.646 Test: blockdev writev readv block ...passed 00:06:08.646 Test: blockdev writev readv size > 128k ...passed 00:06:08.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.646 Test: blockdev comparev and writev ...[2024-11-18 11:49:06.247262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f1038000 len:0x1000 00:06:08.646 [2024-11-18 11:49:06.247305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.646 passed 00:06:08.646 Test: blockdev nvme passthru rw ...passed 00:06:08.646 Test: blockdev nvme passthru vendor specific ...[2024-11-18 11:49:06.247919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.646 [2024-11-18 11:49:06.248045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.646 passed 00:06:08.646 Test: blockdev nvme admin passthru ...passed 00:06:08.646 Test: blockdev copy ...passed 00:06:08.646 Suite: bdevio tests on: Nvme1n1 00:06:08.646 Test: blockdev write read block ...passed 00:06:08.646 Test: blockdev write zeroes read block ...passed 00:06:08.646 Test: blockdev write zeroes read no split ...passed 00:06:08.646 Test: blockdev write zeroes read split ...passed 00:06:08.646 Test: blockdev write zeroes read split partial ...passed 00:06:08.646 Test: blockdev reset ...[2024-11-18 11:49:06.297079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:08.646 [2024-11-18 11:49:06.299847] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed
00:06:08.646 00:06:08.646 Test: blockdev write read 8 blocks ...passed 00:06:08.646 Test: blockdev write read size > 128k ...passed 00:06:08.647 Test: blockdev write read invalid size ...passed 00:06:08.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.647 Test: blockdev write read max offset ...passed 00:06:08.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.647 Test: blockdev writev readv 8 blocks ...passed 00:06:08.647 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.647 Test: blockdev writev readv block ...passed 00:06:08.647 Test: blockdev writev readv size > 128k ...passed 00:06:08.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.647 Test: blockdev comparev and writev ...[2024-11-18 11:49:06.309816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f1034000 len:0x1000 00:06:08.647 [2024-11-18 11:49:06.309943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:08.647 passed 00:06:08.647 Test: blockdev nvme passthru rw ...passed 00:06:08.647 Test: blockdev nvme passthru vendor specific ...[2024-11-18 11:49:06.311077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:08.647 [2024-11-18 11:49:06.311187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:08.647 passed 00:06:08.647 Test: blockdev nvme admin passthru ...passed 00:06:08.647 Test: blockdev copy ...passed 00:06:08.647 Suite: bdevio tests on: Nvme0n1 00:06:08.647 Test: blockdev write read block ...passed 00:06:08.905 Test: blockdev write zeroes read block ...passed 00:06:08.905 Test: blockdev write zeroes read no split ...passed 00:06:08.905 Test: blockdev write zeroes read split ...passed 00:06:08.905 Test: blockdev write zeroes read split partial ...passed 00:06:08.905 Test: blockdev reset ...[2024-11-18 11:49:06.365596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:08.905 [2024-11-18 11:49:06.368384] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:06:08.905 Test: blockdev write read 8 blocks ...
00:06:08.905 passed 00:06:08.905 Test: blockdev write read size > 128k ...passed 00:06:08.905 Test: blockdev write read invalid size ...passed 00:06:08.905 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:08.905 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:08.905 Test: blockdev write read max offset ...passed 00:06:08.905 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:08.905 Test: blockdev writev readv 8 blocks ...passed 00:06:08.905 Test: blockdev writev readv 30 x 1block ...passed 00:06:08.905 Test: blockdev writev readv block ...passed 00:06:08.905 Test: blockdev writev readv size > 128k ...passed 00:06:08.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:08.905 Test: blockdev comparev and writev ...passed 00:06:08.905 Test: blockdev nvme passthru rw ...[2024-11-18 11:49:06.378301] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:08.905 separate metadata which is not supported yet. 00:06:08.905 passed 00:06:08.905 Test: blockdev nvme passthru vendor specific ...passed 00:06:08.905 Test: blockdev nvme admin passthru ...[2024-11-18 11:49:06.378989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:08.905 [2024-11-18 11:49:06.379025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:08.905 passed 00:06:08.905 Test: blockdev copy ...passed 00:06:08.905 00:06:08.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.905 suites 6 6 n/a 0 0 00:06:08.905 tests 138 138 138 0 0 00:06:08.905 asserts 893 893 893 0 n/a 00:06:08.905 00:06:08.905 Elapsed time = 1.012 seconds 00:06:08.905 0 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59882 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59882 ']' 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59882 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59882 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59882' 00:06:08.905 killing process with pid 59882 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59882 00:06:08.905 11:49:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59882 00:06:09.470 11:49:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:09.470 00:06:09.470 real 0m2.125s 00:06:09.470 user 0m5.469s 00:06:09.470 sys 0m0.271s 00:06:09.470 11:49:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.470 11:49:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:09.470 ************************************ 00:06:09.470 END TEST bdev_bounds 00:06:09.470 
************************************ 00:06:09.470 11:49:07 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:09.470 11:49:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:09.470 11:49:07 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.470 11:49:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.470 ************************************ 00:06:09.470 START TEST bdev_nbd 00:06:09.470 ************************************ 00:06:09.470 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:09.470 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:09.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
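The nbd test exports each bdev as a kernel /dev/nbdX device through the bdev_svc app listening on /var/tmp/spdk-nbd.sock, then exercises it with direct-I/O dd, as the waitfornbd/dd lines below show. A condensed sketch of one round trip, assuming that app is already running (the RPC names and dd parameters appear verbatim in the log; the /tmp/nbdtest path is illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # Map bdev Nvme0n1 onto the kernel block device /dev/nbd0
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0
    # Confirm the mapping, then read one 4096-byte block with O_DIRECT
    $rpc nbd_get_disks
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # Tear the export down again
    $rpc nbd_stop_disk /dev/nbd0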
00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59940 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59940 /var/tmp/spdk-nbd.sock 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 59940 ']' 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.728 11:49:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:09.728 [2024-11-18 11:49:07.235089] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:09.728 [2024-11-18 11:49:07.235307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.728 [2024-11-18 11:49:07.393064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.985 [2024-11-18 11:49:07.506121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:10.551 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.552 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.810 1+0 records in 00:06:10.810 1+0 records out 00:06:10.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288631 s, 14.2 MB/s 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:10.810 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.811 1+0 records in 00:06:10.811 1+0 records out 00:06:10.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356025 s, 11.5 MB/s 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:10.811 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.069 1+0 records in 00:06:11.069 1+0 records out 00:06:11.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346872 s, 11.8 MB/s 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.069 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.328 1+0 records in 00:06:11.328 1+0 records out 00:06:11.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340906 s, 12.0 MB/s 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.328 11:49:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.586 1+0 records in 00:06:11.586 1+0 records out 00:06:11.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839431 s, 4.9 MB/s 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.586 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.844 1+0 records in 00:06:11.844 1+0 records out 00:06:11.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889067 s, 4.6 MB/s 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.844 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd0", 00:06:12.103 "bdev_name": "Nvme0n1" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd1", 00:06:12.103 "bdev_name": "Nvme1n1" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd2", 00:06:12.103 "bdev_name": "Nvme2n1" 00:06:12.103 }, 
00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd3", 00:06:12.103 "bdev_name": "Nvme2n2" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd4", 00:06:12.103 "bdev_name": "Nvme2n3" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd5", 00:06:12.103 "bdev_name": "Nvme3n1" 00:06:12.103 } 00:06:12.103 ]' 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd0", 00:06:12.103 "bdev_name": "Nvme0n1" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd1", 00:06:12.103 "bdev_name": "Nvme1n1" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd2", 00:06:12.103 "bdev_name": "Nvme2n1" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd3", 00:06:12.103 "bdev_name": "Nvme2n2" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd4", 00:06:12.103 "bdev_name": "Nvme2n3" 00:06:12.103 }, 00:06:12.103 { 00:06:12.103 "nbd_device": "/dev/nbd5", 00:06:12.103 "bdev_name": "Nvme3n1" 00:06:12.103 } 00:06:12.103 ]' 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.103 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.362 11:49:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.620 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:12.879 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:13.136 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:13.136 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.136 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.136 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.137 11:49:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.137 11:49:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.394 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:13.650 /dev/nbd0 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.650 1+0 records in 00:06:13.650 1+0 records out 00:06:13.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119903 s, 3.4 MB/s 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.650 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:13.908 /dev/nbd1 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.908 1+0 records in 00:06:13.908 1+0 records out 00:06:13.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790062 s, 5.2 MB/s 00:06:13.908 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.909 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:14.182 /dev/nbd10 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.182 1+0 records in 00:06:14.182 1+0 records out 00:06:14.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544267 s, 7.5 MB/s 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
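
The records repeating through this stretch are one readiness probe per attached device (nbd0, nbd1, nbd10, ...): after each nbd_start_disk RPC, the waitfornbd helper polls /proc/partitions up to 20 times for the new device name, then proves the device actually serves I/O with a single 4 KiB O_DIRECT read, accepting it only if a non-empty block came back. A minimal standalone sketch of that pattern, assuming a short sleep between retries (the trace shows only the loop bounds and the grep/dd/stat steps, not the delay):

#!/usr/bin/env bash
# Sketch of the waitfornbd probe traced above; the retry delay is an assumption.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device registered yet?
        sleep 0.1
    done
    # One 4 KiB direct read: fails if the kernel never attached the device.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]   # same check as the '[' 4096 '!=' 0 ']' records above
}
waitfornbd_sketch nbd0   # usage: probe /dev/nbd0
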
00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.182 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:14.439 /dev/nbd11 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.439 11:49:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.439 1+0 records in 00:06:14.439 1+0 records out 00:06:14.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345396 s, 11.9 MB/s 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.439 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:14.697 /dev/nbd12 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
(( i = 1 )) 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.697 1+0 records in 00:06:14.697 1+0 records out 00:06:14.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561336 s, 7.3 MB/s 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.697 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:14.955 /dev/nbd13 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:14.955 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.956 1+0 records in 00:06:14.956 1+0 records out 00:06:14.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456871 s, 9.0 MB/s 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd0", 00:06:14.956 "bdev_name": "Nvme0n1" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd1", 00:06:14.956 "bdev_name": "Nvme1n1" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd10", 00:06:14.956 "bdev_name": "Nvme2n1" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd11", 00:06:14.956 "bdev_name": "Nvme2n2" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd12", 00:06:14.956 "bdev_name": "Nvme2n3" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd13", 00:06:14.956 "bdev_name": "Nvme3n1" 00:06:14.956 } 00:06:14.956 ]' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd0", 00:06:14.956 "bdev_name": "Nvme0n1" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd1", 00:06:14.956 "bdev_name": "Nvme1n1" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd10", 00:06:14.956 "bdev_name": "Nvme2n1" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd11", 00:06:14.956 "bdev_name": "Nvme2n2" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd12", 00:06:14.956 "bdev_name": "Nvme2n3" 00:06:14.956 }, 00:06:14.956 { 00:06:14.956 "nbd_device": "/dev/nbd13", 00:06:14.956 "bdev_name": "Nvme3n1" 00:06:14.956 } 00:06:14.956 ]' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.956 /dev/nbd1 00:06:14.956 /dev/nbd10 00:06:14.956 /dev/nbd11 00:06:14.956 /dev/nbd12 00:06:14.956 /dev/nbd13' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.956 /dev/nbd1 00:06:14.956 /dev/nbd10 00:06:14.956 /dev/nbd11 00:06:14.956 /dev/nbd12 00:06:14.956 /dev/nbd13' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:14.956 256+0 records in 00:06:14.956 256+0 records out 00:06:14.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741944 s, 141 MB/s 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.956 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.214 256+0 records in 00:06:15.214 256+0 records out 00:06:15.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0523506 s, 20.0 MB/s 00:06:15.214 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.214 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.214 256+0 records in 00:06:15.214 256+0 records out 00:06:15.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781278 s, 13.4 MB/s 00:06:15.214 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.214 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:15.214 256+0 records in 00:06:15.214 256+0 records out 00:06:15.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0639551 s, 16.4 MB/s 00:06:15.214 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.214 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:15.473 256+0 records in 00:06:15.473 256+0 records out 00:06:15.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0754442 s, 13.9 MB/s 00:06:15.473 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:15.473 256+0 records in 00:06:15.473 256+0 records out 00:06:15.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642701 s, 16.3 MB/s 00:06:15.473 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:15.473 256+0 records in 00:06:15.473 256+0 records out 00:06:15.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0636156 s, 16.5 MB/s 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
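
The dd records above are the write half of nbd_dd_data_verify: one 1 MiB buffer of /dev/urandom data is pushed through every attached nbd device with O_DIRECT, and the records that follow compare each device back against the same buffer with cmp. Condensed, the whole check amounts to this sketch (device list and block sizes as in the trace; the temp path is shortened):

# Write/verify pass behind the surrounding records: 1 MiB = 256 x 4 KiB blocks.
tmp=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # shared random pattern
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write pass (above)
done
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M "$tmp" "$nbd"                             # verify pass (below)
done
rm "$tmp"
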
00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.473 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.731 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.990 
11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.990 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.249 11:49:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd12 /proc/partitions 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.507 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.765 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:17.023 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:17.281 malloc_lvol_verify 00:06:17.281 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:17.281 878b8fe2-c1e3-49a3-b6ab-274108c81418 00:06:17.281 11:49:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:17.539 7e08ec2a-d740-4218-8149-6e39a8ae9357 00:06:17.539 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:17.797 /dev/nbd0 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:17.797 mke2fs 1.47.0 (5-Feb-2023) 00:06:17.797 Discarding device blocks: 0/4096 done 00:06:17.797 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:17.797 00:06:17.797 Allocating group tables: 0/1 done 00:06:17.797 Writing inode tables: 0/1 done 00:06:17.797 Creating journal (1024 blocks): done 00:06:17.797 Writing superblocks and filesystem accounting information: 0/1 done 00:06:17.797 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.797 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59940 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 59940 ']' 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 59940 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59940 00:06:18.054 killing process with pid 59940 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.054 11:49:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59940' 00:06:18.054 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 59940 00:06:18.055 11:49:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 59940 00:06:18.987 11:49:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:18.987 00:06:18.987 real 0m9.320s 00:06:18.987 user 0m13.353s 00:06:18.987 sys 0m2.881s 00:06:18.987 ************************************ 00:06:18.987 END TEST bdev_nbd 00:06:18.987 ************************************ 00:06:18.987 11:49:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.987 11:49:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:18.987 11:49:16 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:18.987 11:49:16 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:18.987 skipping fio tests on NVMe due to multi-ns failures. 00:06:18.987 11:49:16 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:18.987 11:49:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:18.987 11:49:16 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:18.987 11:49:16 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:18.987 11:49:16 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.987 11:49:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:18.987 ************************************ 00:06:18.987 START TEST bdev_verify 00:06:18.987 ************************************ 00:06:18.987 11:49:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:18.987 [2024-11-18 11:49:16.593019] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:18.987 [2024-11-18 11:49:16.593132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60308 ] 00:06:19.245 [2024-11-18 11:49:16.752388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.245 [2024-11-18 11:49:16.851172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.245 [2024-11-18 11:49:16.851181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.810 Running I/O for 5 seconds... 
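
The bdev_verify stage starting here replays the command from the run_test record above: the bdevperf example app is pointed at the same bdev.json and driven with a verifying workload on two reactor cores. An annotated form of that invocation; the flag glosses follow standard bdevperf usage, and the reading of -C is inferred from the paired Core Mask 0x1/0x2 jobs in the result table below rather than stated in the log:

# Flags: -q 128    -> 128 outstanding I/Os per job ("depth: 128" in the table)
#        -o 4096   -> 4 KiB I/O size
#        -w verify -> write a pattern, read it back, compare
#        -t 5      -> run each job for 5 seconds
#        -m 0x3    -> reactor core mask, cores 0 and 1
#        -C        -> one job per core per bdev (hence two rows per namespace below)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
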
00:06:22.116 22528.00 IOPS, 88.00 MiB/s
[2024-11-18T11:49:20.749Z] 23040.00 IOPS, 90.00 MiB/s
[2024-11-18T11:49:21.682Z] 23616.00 IOPS, 92.25 MiB/s
[2024-11-18T11:49:22.616Z] 23936.00 IOPS, 93.50 MiB/s
[2024-11-18T11:49:22.616Z] 24716.80 IOPS, 96.55 MiB/s
00:06:24.915 Latency(us)
00:06:24.915 [2024-11-18T11:49:22.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:24.915 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x0 length 0xbd0bd
00:06:24.915 Nvme0n1 : 5.05 2027.52 7.92 0.00 0.00 62930.06 12703.90 75820.11
00:06:24.915 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:24.915 Nvme0n1 : 5.03 2061.31 8.05 0.00 0.00 61852.34 12603.08 73803.62
00:06:24.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x0 length 0xa0000
00:06:24.915 Nvme1n1 : 5.05 2026.98 7.92 0.00 0.00 62803.79 13510.50 64124.46
00:06:24.915 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0xa0000 length 0xa0000
00:06:24.915 Nvme1n1 : 5.06 2073.04 8.10 0.00 0.00 61450.97 9779.99 62511.26
00:06:24.915 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x0 length 0x80000
00:06:24.915 Nvme2n1 : 5.05 2025.72 7.91 0.00 0.00 62666.89 13409.67 56058.49
00:06:24.915 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x80000 length 0x80000
00:06:24.915 Nvme2n1 : 5.06 2072.47 8.10 0.00 0.00 61367.87 10183.29 60494.77
00:06:24.915 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x0 length 0x80000
00:06:24.915 Nvme2n2 : 5.07 2033.26 7.94 0.00 0.00 62307.80 3302.01 58881.58
00:06:24.915 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x80000 length 0x80000
00:06:24.915 Nvme2n2 : 5.07 2071.18 8.09 0.00 0.00 61245.03 11695.66 58881.58
00:06:24.915 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x0 length 0x80000
00:06:24.915 Nvme2n3 : 5.08 2040.63 7.97 0.00 0.00 62017.33 7007.31 59688.17
00:06:24.915 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x80000 length 0x80000
00:06:24.915 Nvme2n3 : 5.07 2069.80 8.09 0.00 0.00 61152.69 11241.94 62511.26
00:06:24.915 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x0 length 0x20000
00:06:24.915 Nvme3n1 : 5.08 2040.10 7.97 0.00 0.00 61908.30 7057.72 60494.77
00:06:24.915 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:24.915 Verification LBA range: start 0x20000 length 0x20000
00:06:24.915 Nvme3n1 : 5.07 2069.27 8.08 0.00 0.00 61034.36 7410.61 65334.35
00:06:24.915 [2024-11-18T11:49:22.616Z] ===================================================================================================================
00:06:24.915 [2024-11-18T11:49:22.616Z] Total : 24611.29 96.14 0.00 0.00 61888.47 3302.01 75820.11
00:06:26.818
00:06:26.818 real 0m7.610s
00:06:26.818 user 0m14.333s
00:06:26.818 sys 0m0.203s
00:06:26.818 11:49:24 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.818 11:49:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.818 ************************************ 00:06:26.818 END TEST bdev_verify 00:06:26.818 ************************************ 00:06:26.818 11:49:24 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:26.818 11:49:24 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:26.818 11:49:24 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.818 11:49:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.818 ************************************ 00:06:26.818 START TEST bdev_verify_big_io 00:06:26.818 ************************************ 00:06:26.818 11:49:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:26.818 [2024-11-18 11:49:24.247686] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:26.818 [2024-11-18 11:49:24.247801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:06:26.818 [2024-11-18 11:49:24.407268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.819 [2024-11-18 11:49:24.502673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.819 [2024-11-18 11:49:24.502843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.751 Running I/O for 5 seconds... 
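
bdev_verify_big_io is the same bdevperf harness with one parameter changed: -o 65536 replaces -o 4096, so every request is 64 KiB. Comparing the two result tables, aggregate IOPS drops from roughly 24.6k to roughly 1.8k while aggregate MiB/s stays in the same range (96 vs 110), which is what a 16x larger I/O size predicts. The invocation, as recorded in the run_test line above:

# Identical to the 4 KiB verify run except for the I/O size.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3
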
00:06:31.659 620.00 IOPS, 38.75 MiB/s
[2024-11-18T11:49:31.260Z] 1693.00 IOPS, 105.81 MiB/s
[2024-11-18T11:49:31.518Z] 2348.33 IOPS, 146.77 MiB/s
00:06:33.817 Latency(us)
00:06:33.817 [2024-11-18T11:49:31.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:33.817 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x0 length 0xbd0b
00:06:33.817 Nvme0n1 : 5.67 115.37 7.21 0.00 0.00 1044281.40 11141.12 1271196.75
00:06:33.817 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:33.817 Nvme0n1 : 5.75 151.91 9.49 0.00 0.00 808270.68 14922.04 980821.86
00:06:33.817 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x0 length 0xa000
00:06:33.817 Nvme1n1 : 5.75 122.39 7.65 0.00 0.00 966499.03 80256.39 1032444.06
00:06:33.817 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0xa000 length 0xa000
00:06:33.817 Nvme1n1 : 5.75 142.32 8.90 0.00 0.00 831206.75 85499.27 1380893.93
00:06:33.817 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x0 length 0x8000
00:06:33.817 Nvme2n1 : 5.94 125.29 7.83 0.00 0.00 902195.46 126635.72 858219.13
00:06:33.817 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x8000 length 0x8000
00:06:33.817 Nvme2n1 : 5.88 151.40 9.46 0.00 0.00 754860.53 92355.35 974369.08
00:06:33.817 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x0 length 0x8000
00:06:33.817 Nvme2n2 : 5.96 132.64 8.29 0.00 0.00 829656.53 15123.69 884030.23
00:06:33.817 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x8000 length 0x8000
00:06:33.817 Nvme2n2 : 5.88 156.29 9.77 0.00 0.00 715673.97 121796.14 722710.84
00:06:33.817 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x0 length 0x8000
00:06:33.817 Nvme2n3 : 5.98 136.37 8.52 0.00 0.00 781777.33 17140.18 1845493.76
00:06:33.817 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x8000 length 0x8000
00:06:33.817 Nvme2n3 : 5.93 169.19 10.57 0.00 0.00 656285.55 4789.17 738842.78
00:06:33.817 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x0 length 0x2000
00:06:33.817 Nvme3n1 : 6.07 187.18 11.70 0.00 0.00 552663.08 158.33 1987454.82
00:06:33.817 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:33.817 Verification LBA range: start 0x2000 length 0x2000
00:06:33.817 Nvme3n1 : 5.93 172.67 10.79 0.00 0.00 626877.02 3856.54 777559.43
00:06:33.817 [2024-11-18T11:49:31.518Z] ===================================================================================================================
00:06:33.817 [2024-11-18T11:49:31.518Z] Total : 1763.02 110.19 0.00 0.00 768579.07 158.33 1987454.82
00:06:35.190
00:06:35.190 real 0m8.609s
00:06:35.190 user 0m16.287s
00:06:35.190 sys 0m0.237s 11:49:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:35.190 ************************************
00:06:35.190 END TEST bdev_verify_big_io
00:06:35.190 ************************************
00:06:35.190 11:49:32 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:35.190 11:49:32 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:06:35.190 11:49:32 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:35.190 11:49:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:35.190 ************************************
00:06:35.190 START TEST bdev_write_zeroes
00:06:35.190 ************************************
00:06:35.190 11:49:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:35.449 [2024-11-18 11:49:32.894658] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:06:35.449 [2024-11-18 11:49:32.894766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ]
00:06:35.449 [2024-11-18 11:49:33.045168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.449 [2024-11-18 11:49:33.141296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.014 Running I/O for 1 seconds...
00:06:37.384 72960.00 IOPS, 285.00 MiB/s
00:06:37.384 Latency(us)
00:06:37.384 [2024-11-18T11:49:35.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:37.384 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:37.384 Nvme0n1 : 1.02 12110.29 47.31 0.00 0.00 10549.21 8570.09 21878.94
00:06:37.384 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:37.384 Nvme1n1 : 1.02 12095.89 47.25 0.00 0.00 10547.54 8872.57 21778.12
00:06:37.384 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:37.384 Nvme2n1 : 1.02 12082.06 47.20 0.00 0.00 10537.43 8570.09 21374.82
00:06:37.384 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:37.384 Nvme2n2 : 1.02 12068.40 47.14 0.00 0.00 10485.63 8620.50 19459.15
00:06:37.385 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:37.385 Nvme2n3 : 1.02 12054.72 47.09 0.00 0.00 10461.04 8015.56 20265.75
00:06:37.385 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:37.385 Nvme3n1 : 1.03 12041.05 47.04 0.00 0.00 10447.12 5923.45 21576.47
00:06:37.385 [2024-11-18T11:49:35.086Z] ===================================================================================================================
00:06:37.385 [2024-11-18T11:49:35.086Z] Total : 72452.42 283.02 0.00 0.00 10504.66 5923.45 21878.94
00:06:37.950
00:06:37.950 real 0m2.617s
00:06:37.950 user 0m2.333s
00:06:37.950 sys 0m0.171s
00:06:37.950 11:49:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:37.950 11:49:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:37.950
************************************ 00:06:37.950 END TEST bdev_write_zeroes 00:06:37.950 ************************************ 00:06:37.950 11:49:35 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.950 11:49:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:37.950 11:49:35 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.950 11:49:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 ************************************ 00:06:37.950 START TEST bdev_json_nonenclosed 00:06:37.950 ************************************ 00:06:37.950 11:49:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.950 [2024-11-18 11:49:35.555389] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:37.950 [2024-11-18 11:49:35.555490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:06:38.208 [2024-11-18 11:49:35.715008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.208 [2024-11-18 11:49:35.810810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.208 [2024-11-18 11:49:35.810882] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:38.208 [2024-11-18 11:49:35.810898] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:38.208 [2024-11-18 11:49:35.810907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.466 00:06:38.466 real 0m0.495s 00:06:38.466 user 0m0.298s 00:06:38.466 sys 0m0.094s 00:06:38.466 11:49:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.466 11:49:35 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:38.466 ************************************ 00:06:38.466 END TEST bdev_json_nonenclosed 00:06:38.466 ************************************ 00:06:38.466 11:49:36 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.466 11:49:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:38.466 11:49:36 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.466 11:49:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.466 ************************************ 00:06:38.466 START TEST bdev_json_nonarray 00:06:38.466 ************************************ 00:06:38.466 11:49:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.466 [2024-11-18 11:49:36.090087] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
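
bdev_json_nonenclosed (finished above) and bdev_json_nonarray (starting here) are negative tests: each hands bdevperf a deliberately malformed --json config and passes only when the app logs the Invalid JSON configuration error and exits non-zero, at which point run_test records success. The fixture files themselves are not shown in the log; inferred from the two error strings, they would look roughly like the sketch below (hypothetical /tmp copies, not the repository fixtures):

# nonenclosed.json: top level not enclosed in {} (per the json_config.c:608 error above).
echo '"subsystems": []' > /tmp/nonenclosed.json
# nonarray.json: 'subsystems' present but not an array (per the json_config.c:614 error below).
echo '{ "subsystems": "bdev" }' > /tmp/nonarray.json
# Either config must make bdevperf fail; the wrapper counts a non-zero exit as a pass.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
if "$BDEVPERF" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
    echo 'expected rejection of non-enclosed config' >&2
    exit 1
fi
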
00:06:38.466 [2024-11-18 11:49:36.090197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:06:38.725 [2024-11-18 11:49:36.249407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.725 [2024-11-18 11:49:36.344718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.725 [2024-11-18 11:49:36.344797] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:38.725 [2024-11-18 11:49:36.344813] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:38.725 [2024-11-18 11:49:36.344822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.983 00:06:38.983 real 0m0.488s 00:06:38.983 user 0m0.297s 00:06:38.983 sys 0m0.087s 00:06:38.983 11:49:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.983 ************************************ 00:06:38.983 END TEST bdev_json_nonarray 00:06:38.983 11:49:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:38.983 11:49:36 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:38.983 00:06:38.983 real 0m35.770s 00:06:38.983 user 0m56.349s 00:06:38.983 sys 0m4.806s 00:06:38.983 ************************************ 00:06:38.983 END TEST blockdev_nvme 00:06:38.983 ************************************ 00:06:38.983 11:49:36 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.983 11:49:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 11:49:36 -- spdk/autotest.sh@209 -- # uname -s 00:06:38.983 11:49:36 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:38.983 11:49:36 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:38.983 11:49:36 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:38.983 11:49:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.983 11:49:36 -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 START TEST blockdev_nvme_gpt 00:06:38.983 ************************************ 00:06:38.983 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:38.983 * Looking for test storage... 
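Note: bdev_json_nonarray above exercises the sibling json_config.c check: the document is enclosed, but "subsystems" is not an array, so spdk_app_start fails with the "'subsystems' should be an array" error. A sketch of such a config; the exact body is an assumption inferred from the logged error:

    # Hypothetical body: enclosed in {}, but "subsystems" is an object, not an array.
    printf '%s\n' '{ "subsystems": {} }' > /tmp/nonarray.json
    # The test passes only if bdevperf rejects the file (non-zero exit).
    ! /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonarray.json \
          -q 128 -o 4096 -w write_zeroes -t 1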
00:06:38.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:38.983 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.983 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.983 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.241 11:49:36 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:39.241 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.241 --rc genhtml_branch_coverage=1 00:06:39.241 --rc genhtml_function_coverage=1 00:06:39.241 --rc genhtml_legend=1 00:06:39.241 --rc geninfo_all_blocks=1 00:06:39.241 --rc geninfo_unexecuted_blocks=1 00:06:39.241 00:06:39.241 ' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.241 --rc 
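Note: the lcov gate traced above ("lt 1.15 2") is a pure-bash version comparison from scripts/common.sh: both versions are split on '.', '-' or ':' and compared component by component. A condensed sketch of the same idea, assuming purely numeric components (the real helper also validates each piece with a regex):

    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2"   # matches the gate traced above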
genhtml_branch_coverage=1 00:06:39.241 --rc genhtml_function_coverage=1 00:06:39.241 --rc genhtml_legend=1 00:06:39.241 --rc geninfo_all_blocks=1 00:06:39.241 --rc geninfo_unexecuted_blocks=1 00:06:39.241 00:06:39.241 ' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.241 --rc genhtml_branch_coverage=1 00:06:39.241 --rc genhtml_function_coverage=1 00:06:39.241 --rc genhtml_legend=1 00:06:39.241 --rc geninfo_all_blocks=1 00:06:39.241 --rc geninfo_unexecuted_blocks=1 00:06:39.241 00:06:39.241 ' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.241 --rc genhtml_branch_coverage=1 00:06:39.241 --rc genhtml_function_coverage=1 00:06:39.241 --rc genhtml_legend=1 00:06:39.241 --rc geninfo_all_blocks=1 00:06:39.241 --rc geninfo_unexecuted_blocks=1 00:06:39.241 00:06:39.241 ' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:39.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
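Note: the LCOV_OPTS/LCOV values exported above are only consumed when a coverage run is enabled; this log never invokes lcov itself. A sketch of the assumed downstream use (the capture command is an illustration, not taken from this log):

    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # $LCOV_OPTS is intentionally unquoted so the two --rc options word-split
    lcov $LCOV_OPTS --capture --directory /home/vagrant/spdk_repo/spdk -o coverage.info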
00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:39.241 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60670 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60670 00:06:39.242 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60670 ']' 00:06:39.242 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.242 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.242 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.242 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.242 11:49:36 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:39.242 11:49:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.242 [2024-11-18 11:49:36.802599] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
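Note: start_spdk_tgt above launches the target, arms a kill-on-exit trap, and waitforlisten then polls until the RPC socket at /var/tmp/spdk.sock appears (max_retries=100 in the trace). A condensed sketch of that pattern; the polling loop body is an assumption, while the pid, trap, and socket path come from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # the harness replaces this trap before a normal exit
    trap 'kill "$spdk_tgt_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT

    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do     # max_retries=100, as traced
        [[ -S $rpc_addr ]] && break       # the socket appears once the target listens
        sleep 0.1
    done
    [[ -S $rpc_addr ]] || { echo "spdk_tgt never came up" >&2; exit 1; }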
00:06:39.242 [2024-11-18 11:49:36.802886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60670 ] 00:06:39.499 [2024-11-18 11:49:36.958845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.499 [2024-11-18 11:49:37.054213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.064 11:49:37 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.064 11:49:37 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:06:40.064 11:49:37 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:40.064 11:49:37 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:40.064 11:49:37 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:40.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:40.580 Waiting for block devices as requested 00:06:40.580 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.580 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.580 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.839 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.112 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- 
# for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:46.112 BYT; 00:06:46.112 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:46.112 BYT; 00:06:46.112 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
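Note: setup_gpt_conf above walks /sys/block/nvme*, skips any zoned namespace (queue/zoned reporting anything other than "none"), and claims the first device whose parted output shows an unrecognised disk label, i.e. a blank disk it may safely repartition. A condensed sketch of that selection loop:

    gpt_nvme=
    for sysdev in /sys/block/nvme*; do
        # skip zoned namespaces, mirroring is_block_zoned in the trace
        [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]] && continue
        dev=/dev/${sysdev##*/}
        # a blank disk makes parted print "unrecognised disk label"
        if [[ $(parted "$dev" -ms print 2>&1) == *"unrecognised disk label"* ]]; then
            gpt_nvme=$dev
            break
        fi
    done
    echo "using $gpt_nvme"    # /dev/nvme0n1 in the run above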
00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:46.112 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:46.112 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:46.113 11:49:43 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:46.113 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:46.113 11:49:43 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:47.046 The operation has completed successfully. 00:06:47.046 11:49:44 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:47.981 The operation has completed successfully. 
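Note: rather than hard-coding the SPDK partition-type GUIDs, get_spdk_gpt/get_spdk_gpt_old above scrape them out of the macros in module/bdev/gpt/gpt.h and hand them to sgdisk. A condensed sketch; the exact cleanup substitutions are assumptions, but the trace shows the value going from the 0x-prefixed form to the bare GUID:

    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # read whatever sits between the parentheses of the macro definition
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid// /}     # drop spaces between the macro arguments
    spdk_guid=${spdk_guid//,/-}    # argument separators become GUID dashes
    spdk_guid=${spdk_guid//0x/}    # strip the C hex prefixes
    echo "$spdk_guid"              # 6527994e-2c5a-4eec-9613-8f5944074e8b above

    # retype partition 1 and set its unique GUID, exactly as logged
    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1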
00:06:47.981 11:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.805 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.805 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.805 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:48.805 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.805 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.805 [] 00:06:48.805 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.805 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:48.805 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:48.805 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:48.805 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:48.805 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:48.805 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.805 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.062 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.062 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:49.062 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.062 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.321 11:49:46 blockdev_nvme_gpt -- 
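Note: setup_nvme_conf above asks scripts/gen_nvme.sh for a bdev subsystem document and loads it through rpc_cmd (the harness wrapper around scripts/rpc.py). The JSON below is the logged payload, only pretty-printed; it attaches one controller per PCIe address:

    rpc_cmd load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }'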
common/autotest_common.sh@10 -- # set +x 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:49.321 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:49.321 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:49.322 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "810ae91c-2957-4756-9911-12458c828b73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "810ae91c-2957-4756-9911-12458c828b73",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "10484eb6-838e-4fba-aced-293d64f7c552"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "10484eb6-838e-4fba-aced-293d64f7c552",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2b194317-6e9d-467e-9b10-56f5d2aa760c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b194317-6e9d-467e-9b10-56f5d2aa760c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "165c9be9-0b99-4092-9d26-f966434b8da2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "165c9be9-0b99-4092-9d26-f966434b8da2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f83df6a3-4ac4-4f43-aaee-d8dde4c69ed5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f83df6a3-4ac4-4f43-aaee-d8dde4c69ed5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:49.322 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:49.322 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:49.322 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:49.322 11:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60670 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60670 ']' 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60670 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60670 00:06:49.322 killing process with pid 60670 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60670' 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60670 00:06:49.322 11:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60670 00:06:50.694 11:49:48 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:50.694 11:49:48 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:50.694 11:49:48 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:50.694 11:49:48 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.694 11:49:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.694 ************************************ 00:06:50.694 START TEST bdev_hello_world 00:06:50.694 ************************************ 00:06:50.694 11:49:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:50.694 [2024-11-18 11:49:48.374607] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:06:50.694 [2024-11-18 11:49:48.374695] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61287 ] 00:06:50.953 [2024-11-18 11:49:48.522445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.953 [2024-11-18 11:49:48.598348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.518 [2024-11-18 11:49:49.085977] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:51.518 [2024-11-18 11:49:49.086014] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:51.518 [2024-11-18 11:49:49.086030] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:51.518 [2024-11-18 11:49:49.087918] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:51.518 [2024-11-18 11:49:49.088454] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:51.518 [2024-11-18 11:49:49.088479] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:51.518 [2024-11-18 11:49:49.088656] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:51.518 00:06:51.518 [2024-11-18 11:49:49.088670] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:52.150 00:06:52.150 real 0m1.308s 00:06:52.150 user 0m1.067s 00:06:52.150 sys 0m0.138s 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 ************************************ 00:06:52.150 END TEST bdev_hello_world 00:06:52.150 ************************************ 00:06:52.150 11:49:49 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:52.150 11:49:49 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.150 11:49:49 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.150 11:49:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 ************************************ 00:06:52.150 START TEST bdev_bounds 00:06:52.150 ************************************ 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61324 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61324' 00:06:52.150 Process bdevio pid: 61324 00:06:52.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
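Note: the bdev_hello_world test that just completed runs the hello_bdev example against the first unclaimed bdev: open Nvme0n1, write "Hello World!" through an io channel, read it back, stop the app. Its standalone invocation, taken from the traced command:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR"/build/examples/hello_bdev --json "$SPDK_DIR"/test/bdev/bdev.json -b Nvme0n1
    # expected NOTICE sequence: open bdev -> open io channel -> write ->
    # read back "Hello World!" -> stop app, as logged above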
00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61324 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61324 ']' 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.150 11:49:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.150 [2024-11-18 11:49:49.724935] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:06:52.150 [2024-11-18 11:49:49.725054] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61324 ] 00:06:52.408 [2024-11-18 11:49:49.873820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.408 [2024-11-18 11:49:49.951157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.408 [2024-11-18 11:49:49.951413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.408 [2024-11-18 11:49:49.951414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.974 11:49:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.974 11:49:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:52.974 11:49:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:52.974 I/O targets: 00:06:52.974 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:52.974 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:52.974 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:52.974 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:52.974 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:52.974 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:52.974 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:52.974 00:06:52.974 00:06:52.974 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.974 http://cunit.sourceforge.net/ 00:06:52.974 00:06:52.974 00:06:52.974 Suite: bdevio tests on: Nvme3n1 00:06:52.974 Test: blockdev write read block ...passed 00:06:52.974 Test: blockdev write zeroes read block ...passed 00:06:52.974 Test: blockdev write zeroes read no split ...passed 00:06:52.974 Test: blockdev write zeroes read split ...passed 00:06:52.974 Test: blockdev write zeroes read split partial ...passed 00:06:52.974 Test: blockdev reset ...[2024-11-18 11:49:50.654220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:52.974 [2024-11-18 11:49:50.659097] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
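Note: bdev_bounds above has two halves: bdevio starts with -w (wait for an RPC kick before running) and -s 0 against the shared JSON config, and once waitforlisten sees the socket, tests.py perform_tests drives the per-bdev CUnit suites reported here. A condensed sketch of that orchestration; the backgrounding and teardown details are assumptions:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR"/test/bdev/bdevio/bdevio -w -s 0 --json "$SPDK_DIR"/test/bdev/bdev.json &
    bdevio_pid=$!
    # ... wait for /var/tmp/spdk.sock as sketched earlier, then:
    "$SPDK_DIR"/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid" && wait "$bdevio_pid" || true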
00:06:52.974 passed 00:06:52.974 Test: blockdev write read 8 blocks ...passed 00:06:52.974 Test: blockdev write read size > 128k ...passed 00:06:52.974 Test: blockdev write read invalid size ...passed 00:06:52.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.974 Test: blockdev write read max offset ...passed 00:06:52.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.974 Test: blockdev writev readv 8 blocks ...passed 00:06:52.974 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.974 Test: blockdev writev readv block ...passed 00:06:52.974 Test: blockdev writev readv size > 128k ...passed 00:06:52.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.974 Test: blockdev comparev and writev ...[2024-11-18 11:49:50.666770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7404000 len:0x1000 00:06:52.974 [2024-11-18 11:49:50.666911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.974 passed 00:06:52.974 Test: blockdev nvme passthru rw ...passed 00:06:52.974 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.974 Test: blockdev nvme admin passthru ...[2024-11-18 11:49:50.668122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.974 [2024-11-18 11:49:50.668157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev copy ...passed 00:06:53.236 Suite: bdevio tests on: Nvme2n3 00:06:53.236 Test: blockdev write read block ...passed 00:06:53.236 Test: blockdev write zeroes read block ...passed 00:06:53.236 Test: blockdev write zeroes read no split ...passed 00:06:53.236 Test: blockdev write zeroes read split ...passed 00:06:53.236 Test: blockdev write zeroes read split partial ...passed 00:06:53.236 Test: blockdev reset ...[2024-11-18 11:49:50.720423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.236 [2024-11-18 11:49:50.723468] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.236 passed 00:06:53.236 Test: blockdev write read 8 blocks ...passed 00:06:53.236 Test: blockdev write read size > 128k ...passed 00:06:53.236 Test: blockdev write read invalid size ...passed 00:06:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.236 Test: blockdev write read max offset ...passed 00:06:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.236 Test: blockdev writev readv 8 blocks ...passed 00:06:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.236 Test: blockdev writev readv block ...passed 00:06:53.236 Test: blockdev writev readv size > 128k ...passed 00:06:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.236 Test: blockdev comparev and writev ...[2024-11-18 11:49:50.729988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:53.236 Test: blockdev nvme passthru rw ...passed 00:06:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.236 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2b7402000 len:0x1000 00:06:53.236 [2024-11-18 11:49:50.730095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.236 [2024-11-18 11:49:50.730571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.236 [2024-11-18 11:49:50.730603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev copy ...passed 00:06:53.236 Suite: bdevio tests on: Nvme2n2 00:06:53.236 Test: blockdev write read block ...passed 00:06:53.236 Test: blockdev write zeroes read block ...passed 00:06:53.236 Test: blockdev write zeroes read no split ...passed 00:06:53.236 Test: blockdev write zeroes read split ...passed 00:06:53.236 Test: blockdev write zeroes read split partial ...passed 00:06:53.236 Test: blockdev reset ...[2024-11-18 11:49:50.772393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.236 [2024-11-18 11:49:50.775225] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.236 passed 00:06:53.236 Test: blockdev write read 8 blocks ...passed 00:06:53.236 Test: blockdev write read size > 128k ...passed 00:06:53.236 Test: blockdev write read invalid size ...passed 00:06:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.236 Test: blockdev write read max offset ...passed 00:06:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.236 Test: blockdev writev readv 8 blocks ...passed 00:06:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.236 Test: blockdev writev readv block ...passed 00:06:53.236 Test: blockdev writev readv size > 128k ...passed 00:06:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.236 Test: blockdev comparev and writev ...[2024-11-18 11:49:50.781703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca638000 len:0x1000 00:06:53.236 [2024-11-18 11:49:50.781740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev nvme passthru rw ...passed 00:06:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.236 Test: blockdev nvme admin passthru ...[2024-11-18 11:49:50.782233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.236 [2024-11-18 11:49:50.782255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev copy ...passed 00:06:53.236 Suite: bdevio tests on: Nvme2n1 00:06:53.236 Test: blockdev write read block ...passed 00:06:53.236 Test: blockdev write zeroes read block ...passed 00:06:53.236 Test: blockdev write zeroes read no split ...passed 00:06:53.236 Test: blockdev write zeroes read split ...passed 00:06:53.236 Test: blockdev write zeroes read split partial ...passed 00:06:53.236 Test: blockdev reset ...[2024-11-18 11:49:50.823853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.236 [2024-11-18 11:49:50.826692] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.236 passed 00:06:53.236 Test: blockdev write read 8 blocks ...passed 00:06:53.236 Test: blockdev write read size > 128k ...passed 00:06:53.236 Test: blockdev write read invalid size ...passed 00:06:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.236 Test: blockdev write read max offset ...passed 00:06:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.236 Test: blockdev writev readv 8 blocks ...passed 00:06:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.236 Test: blockdev writev readv block ...passed 00:06:53.236 Test: blockdev writev readv size > 128k ...passed 00:06:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.236 Test: blockdev comparev and writev ...[2024-11-18 11:49:50.832882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca634000 len:0x1000 00:06:53.236 [2024-11-18 11:49:50.832919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev nvme passthru rw ...passed 00:06:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.236 Test: blockdev nvme admin passthru ...[2024-11-18 11:49:50.833338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.236 [2024-11-18 11:49:50.833361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev copy ...passed 00:06:53.236 Suite: bdevio tests on: Nvme1n1p2 00:06:53.236 Test: blockdev write read block ...passed 00:06:53.236 Test: blockdev write zeroes read block ...passed 00:06:53.236 Test: blockdev write zeroes read no split ...passed 00:06:53.236 Test: blockdev write zeroes read split ...passed 00:06:53.236 Test: blockdev write zeroes read split partial ...passed 00:06:53.236 Test: blockdev reset ...[2024-11-18 11:49:50.874980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:53.236 [2024-11-18 11:49:50.877513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:53.236 passed 00:06:53.236 Test: blockdev write read 8 blocks ...passed 00:06:53.236 Test: blockdev write read size > 128k ...passed 00:06:53.236 Test: blockdev write read invalid size ...passed 00:06:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.236 Test: blockdev write read max offset ...passed 00:06:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.236 Test: blockdev writev readv 8 blocks ...passed 00:06:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.236 Test: blockdev writev readv block ...passed 00:06:53.236 Test: blockdev writev readv size > 128k ...passed 00:06:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.236 Test: blockdev comparev and writev ...[2024-11-18 11:49:50.883908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ca630000 len:0x1000 00:06:53.236 [2024-11-18 11:49:50.883945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.236 passed 00:06:53.236 Test: blockdev nvme passthru rw ...passed 00:06:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.236 Test: blockdev nvme admin passthru ...passed 00:06:53.236 Test: blockdev copy ...passed 00:06:53.236 Suite: bdevio tests on: Nvme1n1p1 00:06:53.236 Test: blockdev write read block ...passed 00:06:53.236 Test: blockdev write zeroes read block ...passed 00:06:53.236 Test: blockdev write zeroes read no split ...passed 00:06:53.236 Test: blockdev write zeroes read split ...passed 00:06:53.236 Test: blockdev write zeroes read split partial ...passed 00:06:53.236 Test: blockdev reset ...[2024-11-18 11:49:50.924633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:53.236 passed 00:06:53.236 Test: blockdev write read 8 blocks ...[2024-11-18 11:49:50.927864] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:53.236 passed 00:06:53.237 Test: blockdev write read size > 128k ...passed 00:06:53.237 Test: blockdev write read invalid size ...passed 00:06:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.237 Test: blockdev write read max offset ...passed 00:06:53.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.498 Test: blockdev writev readv 8 blocks ...passed 00:06:53.498 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.498 Test: blockdev writev readv block ...passed 00:06:53.498 Test: blockdev writev readv size > 128k ...passed 00:06:53.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.498 Test: blockdev comparev and writev ...[2024-11-18 11:49:50.948597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b7e0e000 len:0x1000 00:06:53.498 [2024-11-18 11:49:50.948631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.498 passed 00:06:53.498 Test: blockdev nvme passthru rw ...passed 00:06:53.498 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.498 Test: blockdev nvme admin passthru ...passed 00:06:53.498 Test: blockdev copy ...passed 00:06:53.498 Suite: bdevio tests on: Nvme0n1 00:06:53.498 Test: blockdev write read block ...passed 00:06:53.498 Test: blockdev write zeroes read block ...passed 00:06:53.498 Test: blockdev write zeroes read no split ...passed 00:06:53.498 Test: blockdev write zeroes read split ...passed 00:06:53.498 Test: blockdev write zeroes read split partial ...passed 00:06:53.498 Test: blockdev reset ...[2024-11-18 11:49:51.000469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:53.498 passed 00:06:53.498 Test: blockdev write read 8 blocks ...[2024-11-18 11:49:51.004473] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:53.498 passed 00:06:53.498 Test: blockdev write read size > 128k ...passed 00:06:53.498 Test: blockdev write read invalid size ...passed 00:06:53.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.498 Test: blockdev write read max offset ...passed 00:06:53.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.498 Test: blockdev writev readv 8 blocks ...passed 00:06:53.498 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.498 Test: blockdev writev readv block ...passed 00:06:53.498 Test: blockdev writev readv size > 128k ...passed 00:06:53.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.498 Test: blockdev comparev and writev ...passed 00:06:53.498 Test: blockdev nvme passthru rw ...[2024-11-18 11:49:51.021810] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:53.498 separate metadata which is not supported yet. 
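[Editor's note] On Nvme0n1 the comparev_and_writev case is skipped because that namespace is formatted with separate (non-interleaved) metadata, which the test does not support yet; the vendor-specific passthru case then appears to pass by sending an unsupported opcode and observing the INVALID OPCODE rejection. One rough way to inspect the metadata layout that triggers the skip; the exact field names in bdev_get_bdevs output are my assumption:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# A non-zero md_size with md_interleave == false is the "separate
# metadata" case the skip message above refers to.
$rpc bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'
```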
00:06:53.498 passed 00:06:53.498 Test: blockdev nvme passthru vendor specific ...[2024-11-18 11:49:51.023155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:53.498 [2024-11-18 11:49:51.023271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:53.498 passed 00:06:53.498 Test: blockdev nvme admin passthru ...passed 00:06:53.498 Test: blockdev copy ...passed 00:06:53.498 00:06:53.498 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.498 suites 7 7 n/a 0 0 00:06:53.498 tests 161 161 161 0 0 00:06:53.498 asserts 1025 1025 1025 0 n/a 00:06:53.498 00:06:53.498 Elapsed time = 1.111 seconds 00:06:53.498 0 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61324 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61324 ']' 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61324 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61324 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.498 killing process with pid 61324 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61324' 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61324 00:06:53.498 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61324 00:06:54.071 11:49:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:54.071 00:06:54.071 real 0m2.070s 00:06:54.071 user 0m5.304s 00:06:54.071 sys 0m0.254s 00:06:54.071 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.071 11:49:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.071 ************************************ 00:06:54.071 END TEST bdev_bounds 00:06:54.071 ************************************ 00:06:54.329 11:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.329 11:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:54.329 11:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.329 11:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.329 ************************************ 00:06:54.329 START TEST bdev_nbd 00:06:54.329 ************************************ 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61378 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:54.329 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61378 /var/tmp/spdk-nbd.sock 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61378 ']' 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.330 11:49:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.330 [2024-11-18 11:49:51.870856] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
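[Editor's note] bdev_nbd exports the seven bdevs as kernel NBD devices through a minimal SPDK app (bdev_svc) controlled over a UNIX-socket RPC server. Stripped of the harness plumbing, the boot-and-wait step is roughly the following (paths mirror this run; the rpc_get_methods probe is my stand-in for the waitforlisten helper):

```bash
svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
sock=/var/tmp/spdk-nbd.sock
$svc -r "$sock" -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
# Wait until the RPC server answers before issuing nbd_start_disk calls.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
```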
00:06:54.330 [2024-11-18 11:49:51.870966] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.589 [2024-11-18 11:49:52.030763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.589 [2024-11-18 11:49:52.127764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.160 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.161 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:55.161 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:55.161 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:55.161 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:55.161 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.161 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.422 1+0 records in 00:06:55.422 1+0 records out 00:06:55.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400303 s, 10.2 MB/s 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.422 11:49:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.683 1+0 records in 00:06:55.683 1+0 records out 00:06:55.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114805 s, 3.6 MB/s 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:55.683 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.944 1+0 records in 00:06:55.944 1+0 records out 00:06:55.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378756 s, 10.8 MB/s 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.944 1+0 records in 00:06:55.944 1+0 records out 00:06:55.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487401 s, 8.4 MB/s 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.944 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.205 1+0 records in 00:06:56.205 1+0 records out 00:06:56.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849629 s, 4.8 MB/s 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.205 11:49:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.465 1+0 records in 00:06:56.465 1+0 records out 00:06:56.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132223 s, 3.1 MB/s 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.465 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.727 1+0 records in 00:06:56.727 1+0 records out 00:06:56.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083944 s, 4.9 MB/s 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.727 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd0", 00:06:56.988 "bdev_name": "Nvme0n1" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd1", 00:06:56.988 "bdev_name": "Nvme1n1p1" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd2", 00:06:56.988 "bdev_name": "Nvme1n1p2" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd3", 00:06:56.988 "bdev_name": "Nvme2n1" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd4", 00:06:56.988 "bdev_name": "Nvme2n2" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd5", 00:06:56.988 "bdev_name": "Nvme2n3" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd6", 00:06:56.988 "bdev_name": "Nvme3n1" 00:06:56.988 } 00:06:56.988 ]' 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd0", 00:06:56.988 "bdev_name": "Nvme0n1" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd1", 00:06:56.988 "bdev_name": "Nvme1n1p1" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd2", 00:06:56.988 "bdev_name": "Nvme1n1p2" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd3", 00:06:56.988 "bdev_name": "Nvme2n1" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd4", 00:06:56.988 "bdev_name": "Nvme2n2" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd5", 00:06:56.988 "bdev_name": "Nvme2n3" 00:06:56.988 }, 00:06:56.988 { 00:06:56.988 "nbd_device": "/dev/nbd6", 00:06:56.988 "bdev_name": "Nvme3n1" 00:06:56.988 } 00:06:56.988 ]' 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.988 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.249 11:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.511 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.773 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.034 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.295 11:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
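[Editor's note] Every nbd_start_disk/nbd_stop_disk call above is followed by a poll of /proc/partitions, bounded at 20 iterations, until the kernel device appears or disappears; the start side additionally proves the device readable with a single O_DIRECT dd. A condensed sketch of the two helpers (simplified from autotest_common.sh and nbd_common.sh; the sleep interval is my choice):

```bash
waitfornbd() {        # up once the name shows in /proc/partitions
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    (( i <= 20 )) || return 1
    # Prove it is readable, not merely present:
    dd if=/dev/$name of=/dev/null bs=4096 count=1 iflag=direct
}

waitfornbd_exit() {   # gone once the name leaves /proc/partitions
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}
```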
00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.556 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.817 11:49:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.817 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:58.818 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.818 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.818 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:59.078 /dev/nbd0 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:59.078 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.078 1+0 records in 00:06:59.078 1+0 records out 00:06:59.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806339 s, 5.1 MB/s 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.079 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:59.339 /dev/nbd1 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:59.339 11:49:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:59.339 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.340 1+0 records in 00:06:59.340 1+0 records out 00:06:59.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918722 s, 4.5 MB/s 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.340 11:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:59.340 /dev/nbd10 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:59.340 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.600 1+0 records in 00:06:59.600 1+0 records out 00:06:59.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101256 s, 4.0 MB/s 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.600 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:59.600 /dev/nbd11 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.601 1+0 records in 00:06:59.601 1+0 records out 00:06:59.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116748 s, 3.5 MB/s 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.601 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:59.860 /dev/nbd12 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.860 1+0 records in 00:06:59.860 1+0 records out 00:06:59.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113776 s, 3.6 MB/s 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.860 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:00.120 /dev/nbd13 00:07:00.120 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:00.120 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:00.120 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:00.120 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.121 1+0 records in 00:07:00.121 1+0 records out 00:07:00.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116648 s, 3.5 MB/s 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.121 11:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:00.380 /dev/nbd14 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.380 1+0 records in 00:07:00.380 1+0 records out 00:07:00.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906097 s, 4.5 MB/s 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.380 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd0", 00:07:00.641 "bdev_name": "Nvme0n1" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd1", 00:07:00.641 "bdev_name": "Nvme1n1p1" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd10", 00:07:00.641 "bdev_name": "Nvme1n1p2" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd11", 00:07:00.641 "bdev_name": "Nvme2n1" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd12", 00:07:00.641 "bdev_name": "Nvme2n2" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd13", 00:07:00.641 "bdev_name": "Nvme2n3" 
00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd14", 00:07:00.641 "bdev_name": "Nvme3n1" 00:07:00.641 } 00:07:00.641 ]' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd0", 00:07:00.641 "bdev_name": "Nvme0n1" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd1", 00:07:00.641 "bdev_name": "Nvme1n1p1" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd10", 00:07:00.641 "bdev_name": "Nvme1n1p2" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd11", 00:07:00.641 "bdev_name": "Nvme2n1" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd12", 00:07:00.641 "bdev_name": "Nvme2n2" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd13", 00:07:00.641 "bdev_name": "Nvme2n3" 00:07:00.641 }, 00:07:00.641 { 00:07:00.641 "nbd_device": "/dev/nbd14", 00:07:00.641 "bdev_name": "Nvme3n1" 00:07:00.641 } 00:07:00.641 ]' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.641 /dev/nbd1 00:07:00.641 /dev/nbd10 00:07:00.641 /dev/nbd11 00:07:00.641 /dev/nbd12 00:07:00.641 /dev/nbd13 00:07:00.641 /dev/nbd14' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.641 /dev/nbd1 00:07:00.641 /dev/nbd10 00:07:00.641 /dev/nbd11 00:07:00.641 /dev/nbd12 00:07:00.641 /dev/nbd13 00:07:00.641 /dev/nbd14' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:00.641 256+0 records in 00:07:00.641 256+0 records out 00:07:00.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789306 s, 133 MB/s 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.641 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.903 256+0 records in 00:07:00.903 256+0 records out 00:07:00.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.154545 s, 6.8 MB/s 00:07:00.903 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.903 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.163 256+0 records in 00:07:01.163 256+0 records out 00:07:01.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188342 s, 5.6 MB/s 00:07:01.163 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.163 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:01.163 256+0 records in 00:07:01.163 256+0 records out 00:07:01.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174384 s, 6.0 MB/s 00:07:01.163 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.163 11:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:01.423 256+0 records in 00:07:01.423 256+0 records out 00:07:01.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197458 s, 5.3 MB/s 00:07:01.423 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.423 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:01.691 256+0 records in 00:07:01.691 256+0 records out 00:07:01.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223579 s, 4.7 MB/s 00:07:01.691 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.691 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:01.954 256+0 records in 00:07:01.954 256+0 records out 00:07:01.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193935 s, 5.4 MB/s 00:07:01.954 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.954 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:02.214 256+0 records in 00:07:02.214 256+0 records out 00:07:02.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236468 s, 4.4 MB/s 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:02.214 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.215 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.474 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.474 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.475 11:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.735 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.736 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.997 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.258 11:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.520 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.781 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:04.041 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:04.300 malloc_lvol_verify 00:07:04.300 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:04.300 534623e0-9ad6-4a3d-a521-35ec15eb0b98 00:07:04.300 11:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:04.558 6b60383f-4b05-44f0-834d-869b3b641d90 00:07:04.558 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:04.816 /dev/nbd0 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:04.816 mke2fs 1.47.0 (5-Feb-2023) 00:07:04.816 Discarding device blocks: 0/4096 done 00:07:04.816 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:04.816 00:07:04.816 Allocating group tables: 0/1 done 00:07:04.816 Writing inode tables: 0/1 done 00:07:04.816 Creating journal (1024 blocks): done 00:07:04.816 Writing superblocks and filesystem accounting information: 0/1 done 00:07:04.816 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:04.816 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61378 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61378 ']' 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61378 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61378 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.074 killing process with pid 61378 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61378' 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61378 00:07:05.074 11:50:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61378 00:07:05.640 11:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:05.640 00:07:05.640 real 0m11.421s 00:07:05.640 user 0m15.750s 00:07:05.640 sys 0m3.752s 00:07:05.640 11:50:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.640 11:50:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:05.640 ************************************ 00:07:05.640 END TEST bdev_nbd 00:07:05.640 ************************************ 00:07:05.640 11:50:03 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:05.641 11:50:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:05.641 11:50:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:05.641 skipping fio tests on NVMe due to multi-ns failures. 00:07:05.641 11:50:03 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:05.641 11:50:03 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.641 11:50:03 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.641 11:50:03 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:05.641 11:50:03 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.641 11:50:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.641 ************************************ 00:07:05.641 START TEST bdev_verify 00:07:05.641 ************************************ 00:07:05.641 11:50:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.641 [2024-11-18 11:50:03.335946] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:05.641 [2024-11-18 11:50:03.336033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61800 ] 00:07:05.898 [2024-11-18 11:50:03.483044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.898 [2024-11-18 11:50:03.560955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.898 [2024-11-18 11:50:03.561131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.463 Running I/O for 5 seconds... 
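The bdev_verify stage drives every bdev from bdev.json through bdevperf with -q 128 (queue depth), -o 4096 (4 KiB requests), -w verify (write, read back, check payload), -t 5 (five-second run) and -m 0x3 (cores 0 and 1, which is why each device appears twice in the table below, once per core mask; the Average/min/max columns are in microseconds). A stand-alone sketch of the same invocation, assuming the repo layout used in this run:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3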
00:07:08.771 27264.00 IOPS, 106.50 MiB/s
[2024-11-18T11:50:07.409Z] 26976.00 IOPS, 105.38 MiB/s
[2024-11-18T11:50:08.352Z] 25536.00 IOPS, 99.75 MiB/s
[2024-11-18T11:50:09.289Z] 24096.00 IOPS, 94.12 MiB/s
[2024-11-18T11:50:09.289Z] 23833.60 IOPS, 93.10 MiB/s
00:07:11.588 Latency(us)
[2024-11-18T11:50:09.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0xbd0bd
Nvme0n1 : 5.05 1735.87 6.78 0.00 0.00 73399.69 5898.24 80659.69
Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0xbd0bd length 0xbd0bd
Nvme0n1 : 5.05 1622.57 6.34 0.00 0.00 78549.20 13208.02 75416.81
Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4ff80
Nvme1n1p1 : 5.06 1743.86 6.81 0.00 0.00 73171.52 9427.10 77836.60
Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x4ff80 length 0x4ff80
Nvme1n1p1 : 5.05 1622.06 6.34 0.00 0.00 78400.57 15426.17 73803.62
Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4ff7f
Nvme1n1p2 : 5.07 1742.97 6.81 0.00 0.00 73094.00 10485.76 76223.41
Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x4ff7f length 0x4ff7f
Nvme1n1p2 : 5.07 1627.72 6.36 0.00 0.00 78052.85 6200.71 71787.13
Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x80000
Nvme2n1 : 5.07 1742.56 6.81 0.00 0.00 72975.20 10838.65 70980.53
Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x80000 length 0x80000
Nvme2n1 : 5.08 1636.63 6.39 0.00 0.00 77609.23 8418.86 68157.44
Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x80000
Nvme2n2 : 5.07 1742.17 6.81 0.00 0.00 72874.99 10637.00 75013.51
Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x80000 length 0x80000
Nvme2n2 : 5.08 1636.20 6.39 0.00 0.00 77460.04 8771.74 70577.23
Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x80000
Nvme2n3 : 5.07 1741.80 6.80 0.00 0.00 72777.77 10939.47 79449.80
Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x80000 length 0x80000
Nvme2n3 : 5.09 1635.77 6.39 0.00 0.00 77334.26 9074.22 72190.42
Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x20000
Nvme3n1 : 5.07 1741.43 6.80 0.00 0.00 72670.22 6704.84 81869.59
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.09 1635.33 6.39 0.00 0.00 77273.38 8721.33 75416.81
[2024-11-18T11:50:09.290Z] ===================================================================================================================
[2024-11-18T11:50:09.290Z] Total : 23606.95 92.21 0.00 0.00 75324.03 5898.24 81869.59
00:07:12.529 00:07:12.529 real 0m6.832s 00:07:12.529 user 0m12.581s 00:07:12.529 sys 0m0.182s 00:07:12.530 11:50:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.530 11:50:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:12.530 ************************************ 00:07:12.530 END TEST bdev_verify 00:07:12.530 ************************************ 00:07:12.530 11:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:12.530 11:50:10 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:12.530 11:50:10 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.530 11:50:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.530 ************************************ 00:07:12.530 START TEST bdev_verify_big_io 00:07:12.530 ************************************ 00:07:12.530 11:50:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:12.791 [2024-11-18 11:50:10.242667] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:12.791 [2024-11-18 11:50:10.242783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61899 ] 00:07:13.052 [2024-11-18 11:50:10.402828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.052 [2024-11-18 11:50:10.499488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.052 [2024-11-18 11:50:10.499574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.622 Running I/O for 5 seconds...
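bdev_verify_big_io repeats the verify workload with -o 65536, so each request is 64 KiB. At a similar byte rate that divides IOPS by roughly 16 relative to the 4 KiB pass, which can be sanity-checked against the Total row of the table that follows:

    # 64 KiB per I/O: MiB/s = IOPS * 65536 / 2^20, i.e. IOPS / 16
    echo $(( 1951 * 65536 / 1048576 ))   # prints 121; the table reports 1951.37 IOPS, 121.96 MiB/s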
00:07:19.189 2028.00 IOPS, 126.75 MiB/s
[2024-11-18T11:50:17.455Z] 3677.50 IOPS, 229.84 MiB/s
[2024-11-18T11:50:17.455Z] 3475.00 IOPS, 217.19 MiB/s
00:07:19.754 Latency(us)
[2024-11-18T11:50:17.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0xbd0b
Nvme0n1 : 5.63 113.60 7.10 0.00 0.00 1077642.95 32465.53 1400252.26
Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0xbd0b length 0xbd0b
Nvme0n1 : 5.73 100.59 6.29 0.00 0.00 1206481.53 20769.87 1561571.64
Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0x4ff8
Nvme1n1p1 : 5.77 114.90 7.18 0.00 0.00 1021939.44 127442.31 1187310.67
Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x4ff8 length 0x4ff8
Nvme1n1p1 : 5.73 139.31 8.71 0.00 0.00 859758.53 101631.21 1006632.96
Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0x4ff7
Nvme1n1p2 : 5.96 118.72 7.42 0.00 0.00 951184.84 101227.91 1064707.94
Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x4ff7 length 0x4ff7
Nvme1n1p2 : 5.73 138.40 8.65 0.00 0.00 836448.64 133895.09 845313.58
Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0x8000
Nvme2n1 : 6.05 120.63 7.54 0.00 0.00 916259.85 68964.04 1690627.15
Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x8000 length 0x8000
Nvme2n1 : 5.95 145.60 9.10 0.00 0.00 773068.03 65334.35 877577.45
Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0x8000
Nvme2n2 : 6.05 122.91 7.68 0.00 0.00 866983.35 21374.82 1948738.17
Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x8000 length 0x8000
Nvme2n2 : 5.96 150.39 9.40 0.00 0.00 734803.50 88725.66 896935.78
Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0x8000
Nvme2n3 : 6.14 148.73 9.30 0.00 0.00 695227.85 9275.86 1974549.27
Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x8000 length 0x8000
Nvme2n3 : 6.04 158.96 9.93 0.00 0.00 677923.08 27021.00 909841.33
Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x0 length 0x2000
Nvme3n1 : 6.21 209.29 13.08 0.00 0.00 483175.37 316.65 1542213.32
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
Verification LBA range: start 0x2000 length 0x2000
Nvme3n1 : 6.05 169.36 10.59 0.00 0.00 618901.57 1241.40 929199.66
[2024-11-18T11:50:17.455Z] ===================================================================================================================
[2024-11-18T11:50:17.455Z] Total : 1951.37 121.96 0.00 0.00 798727.16 316.65 1974549.27
00:07:21.688 00:07:21.688 real 0m8.752s 00:07:21.688 user 0m16.136s 00:07:21.688 sys 0m0.229s 00:07:21.688 11:50:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.688 ************************************ 00:07:21.688 END TEST bdev_verify_big_io 00:07:21.688 ************************************ 00:07:21.688 11:50:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:21.688 11:50:18 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.688 11:50:18 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:21.688 11:50:18 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.688 11:50:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.688 ************************************ 00:07:21.688 START TEST bdev_write_zeroes 00:07:21.688 ************************************ 00:07:21.688 11:50:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.688 [2024-11-18 11:50:19.046012] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:21.688 [2024-11-18 11:50:19.046125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62009 ] 00:07:21.688 [2024-11-18 11:50:19.206705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.688 [2024-11-18 11:50:19.306308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.258 Running I/O for 1 seconds...
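bdev_write_zeroes switches the workload to -w write_zeroes -t 1: one second of zero-fill requests per bdev, with no read-back. Whether a bdev accepts this I/O type at all is advertised in its supported_io_types map; a quick check against the default RPC socket, using the same jq path that appears in the bdev dumps later in this log (bdev name illustrative):

    scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 | jq -r '.[0].supported_io_types.write_zeroes'   # expect: true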
00:07:23.458 64960.00 IOPS, 253.75 MiB/s
00:07:23.458 Latency(us)
[2024-11-18T11:50:21.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme0n1 : 1.02 9260.14 36.17 0.00 0.00 13786.97 6956.90 25206.15
Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme1n1p1 : 1.02 9248.42 36.13 0.00 0.00 13790.61 10889.06 26012.75
Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme1n1p2 : 1.03 9237.09 36.08 0.00 0.00 13768.63 10737.82 24702.03
Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme2n1 : 1.03 9226.67 36.04 0.00 0.00 13758.64 10989.88 23996.26
Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme2n2 : 1.03 9216.23 36.00 0.00 0.00 13754.99 10838.65 23492.14
Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme2n3 : 1.03 9205.88 35.96 0.00 0.00 13725.13 8922.98 23592.96
Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
Nvme3n1 : 1.03 9195.49 35.92 0.00 0.00 13700.36 7108.14 25306.98
[2024-11-18T11:50:21.159Z] ===================================================================================================================
[2024-11-18T11:50:21.159Z] Total : 64589.92 252.30 0.00 0.00 13755.05 6956.90 26012.75
00:07:24.031 00:07:24.031 real 0m2.672s 00:07:24.031 user 0m2.383s 00:07:24.031 sys 0m0.175s 00:07:24.031 11:50:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.031 ************************************ 00:07:24.031 END TEST bdev_write_zeroes 11:50:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:24.031 ************************************ 00:07:24.031 11:50:21 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.031 11:50:21 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:24.031 11:50:21 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.031 11:50:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.031 ************************************ 00:07:24.031 START TEST bdev_json_nonenclosed 00:07:24.031 ************************************ 00:07:24.031 11:50:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.292 [2024-11-18 11:50:21.785533] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
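bdev_json_nonenclosed, and the bdev_json_nonarray test after it, are negative tests: bdevperf is handed a deliberately malformed --json config and must exit with the parse errors shown below rather than crash. The two broken shapes next to a valid skeleton (contents inferred from the error messages; file names are illustrative):

    echo '{ "subsystems": [] }' > good.json         # valid: object with a "subsystems" array
    echo '"subsystems": []'     > nonenclosed.json  # rejected: not enclosed in {}
    echo '{ "subsystems": {} }' > nonarray.json     # rejected: "subsystems" should be an array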
00:07:24.292 [2024-11-18 11:50:21.785659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ] 00:07:24.292 [2024-11-18 11:50:21.945928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.554 [2024-11-18 11:50:22.041436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.554 [2024-11-18 11:50:22.041505] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:24.554 [2024-11-18 11:50:22.041520] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.554 [2024-11-18 11:50:22.041529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.554 00:07:24.554 real 0m0.494s 00:07:24.554 user 0m0.308s 00:07:24.554 sys 0m0.081s 00:07:24.554 11:50:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.554 11:50:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:24.554 ************************************ 00:07:24.554 END TEST bdev_json_nonenclosed 00:07:24.554 ************************************ 00:07:24.816 11:50:22 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.817 11:50:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:24.817 11:50:22 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.817 11:50:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.817 ************************************ 00:07:24.817 START TEST bdev_json_nonarray 00:07:24.817 ************************************ 00:07:24.817 11:50:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.817 [2024-11-18 11:50:22.346000] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:24.817 [2024-11-18 11:50:22.346108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62082 ] 00:07:24.817 [2024-11-18 11:50:22.506240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.078 [2024-11-18 11:50:22.602202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.078 [2024-11-18 11:50:22.602280] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:25.078 [2024-11-18 11:50:22.602297] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.078 [2024-11-18 11:50:22.602305] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.341 00:07:25.341 real 0m0.499s 00:07:25.341 user 0m0.299s 00:07:25.341 sys 0m0.094s 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.341 ************************************ 00:07:25.341 END TEST bdev_json_nonarray 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:25.341 ************************************ 00:07:25.341 11:50:22 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:25.341 11:50:22 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:25.341 11:50:22 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:25.341 11:50:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.341 11:50:22 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.341 11:50:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.341 ************************************ 00:07:25.341 START TEST bdev_gpt_uuid 00:07:25.341 ************************************ 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62113 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62113 00:07:25.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62113 ']' 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:25.341 11:50:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.341 [2024-11-18 11:50:22.928954] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
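The bdev_gpt_uuid test starting here loads bdev.json into spdk_tgt and looks each GPT partition bdev up by UUID, asserting that the alias and the GPT unique_partition_guid round-trip to the UUID used for the lookup. Reduced to a sketch (RPC socket assumed to be the default /var/tmp/spdk.sock; the UUID is the SPDK_TEST_first partition from the dump below):

    scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
    # expected output: 6f89f330-603b-4116-ac73-2ca8eae53030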
00:07:25.341 [2024-11-18 11:50:22.929072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:07:25.604 [2024-11-18 11:50:23.082670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.604 [2024-11-18 11:50:23.179844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.178 11:50:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.178 11:50:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:07:26.178 11:50:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.178 11:50:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.178 11:50:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 Some configs were skipped because the RPC state that can call them passed over. 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:26.450 { 00:07:26.450 "name": "Nvme1n1p1", 00:07:26.450 "aliases": [ 00:07:26.450 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:26.450 ], 00:07:26.450 "product_name": "GPT Disk", 00:07:26.450 "block_size": 4096, 00:07:26.450 "num_blocks": 655104, 00:07:26.450 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:26.450 "assigned_rate_limits": { 00:07:26.450 "rw_ios_per_sec": 0, 00:07:26.450 "rw_mbytes_per_sec": 0, 00:07:26.450 "r_mbytes_per_sec": 0, 00:07:26.450 "w_mbytes_per_sec": 0 00:07:26.450 }, 00:07:26.450 "claimed": false, 00:07:26.450 "zoned": false, 00:07:26.450 "supported_io_types": { 00:07:26.450 "read": true, 00:07:26.450 "write": true, 00:07:26.450 "unmap": true, 00:07:26.450 "flush": true, 00:07:26.450 "reset": true, 00:07:26.450 "nvme_admin": false, 00:07:26.450 "nvme_io": false, 00:07:26.450 "nvme_io_md": false, 00:07:26.450 "write_zeroes": true, 00:07:26.450 "zcopy": false, 00:07:26.450 "get_zone_info": false, 00:07:26.450 "zone_management": false, 00:07:26.450 "zone_append": false, 00:07:26.450 "compare": true, 00:07:26.450 "compare_and_write": false, 00:07:26.450 "abort": true, 00:07:26.450 "seek_hole": false, 00:07:26.450 "seek_data": false, 00:07:26.450 "copy": true, 00:07:26.450 "nvme_iov_md": false 00:07:26.450 }, 00:07:26.450 "driver_specific": { 
00:07:26.450 "gpt": { 00:07:26.450 "base_bdev": "Nvme1n1", 00:07:26.450 "offset_blocks": 256, 00:07:26.450 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:26.450 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:26.450 "partition_name": "SPDK_TEST_first" 00:07:26.450 } 00:07:26.450 } 00:07:26.450 } 00:07:26.450 ]' 00:07:26.450 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:26.711 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:26.711 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:26.711 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:26.711 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:26.711 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:26.711 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:26.712 { 00:07:26.712 "name": "Nvme1n1p2", 00:07:26.712 "aliases": [ 00:07:26.712 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:26.712 ], 00:07:26.712 "product_name": "GPT Disk", 00:07:26.712 "block_size": 4096, 00:07:26.712 "num_blocks": 655103, 00:07:26.712 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:26.712 "assigned_rate_limits": { 00:07:26.712 "rw_ios_per_sec": 0, 00:07:26.712 "rw_mbytes_per_sec": 0, 00:07:26.712 "r_mbytes_per_sec": 0, 00:07:26.712 "w_mbytes_per_sec": 0 00:07:26.712 }, 00:07:26.712 "claimed": false, 00:07:26.712 "zoned": false, 00:07:26.712 "supported_io_types": { 00:07:26.712 "read": true, 00:07:26.712 "write": true, 00:07:26.712 "unmap": true, 00:07:26.712 "flush": true, 00:07:26.712 "reset": true, 00:07:26.712 "nvme_admin": false, 00:07:26.712 "nvme_io": false, 00:07:26.712 "nvme_io_md": false, 00:07:26.712 "write_zeroes": true, 00:07:26.712 "zcopy": false, 00:07:26.712 "get_zone_info": false, 00:07:26.712 "zone_management": false, 00:07:26.712 "zone_append": false, 00:07:26.712 "compare": true, 00:07:26.712 "compare_and_write": false, 00:07:26.712 "abort": true, 00:07:26.712 "seek_hole": false, 00:07:26.712 "seek_data": false, 00:07:26.712 "copy": true, 00:07:26.712 "nvme_iov_md": false 00:07:26.712 }, 00:07:26.712 "driver_specific": { 00:07:26.712 "gpt": { 00:07:26.712 "base_bdev": "Nvme1n1", 00:07:26.712 "offset_blocks": 655360, 00:07:26.712 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:26.712 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:26.712 "partition_name": "SPDK_TEST_second" 00:07:26.712 } 00:07:26.712 } 00:07:26.712 } 00:07:26.712 ]' 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62113 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62113 ']' 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62113 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62113 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.712 killing process with pid 62113 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62113' 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62113 00:07:26.712 11:50:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62113 00:07:28.629 00:07:28.629 real 0m3.006s 00:07:28.629 user 0m3.147s 00:07:28.629 sys 0m0.368s 00:07:28.629 11:50:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.629 ************************************ 00:07:28.629 11:50:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.629 END TEST bdev_gpt_uuid 00:07:28.629 ************************************ 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:28.629 11:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:28.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.891 Waiting for block devices as requested 00:07:28.891 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.891 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:28.891 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.152 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.449 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:34.449 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:34.449 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:34.449 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:34.449 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:34.449 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:34.449 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:34.449 11:50:32 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:34.449 00:07:34.449 real 0m55.413s 00:07:34.449 user 1m9.653s 00:07:34.449 sys 0m7.675s 00:07:34.449 11:50:32 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.449 ************************************ 00:07:34.449 END TEST blockdev_nvme_gpt 00:07:34.449 ************************************ 00:07:34.449 11:50:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.449 11:50:32 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:34.449 11:50:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.449 11:50:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.450 11:50:32 -- common/autotest_common.sh@10 -- # set +x 00:07:34.450 ************************************ 00:07:34.450 START TEST nvme 00:07:34.450 ************************************ 00:07:34.450 11:50:32 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:34.450 * Looking for test storage... 00:07:34.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:34.450 11:50:32 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.450 11:50:32 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.450 11:50:32 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.712 11:50:32 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.712 11:50:32 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.712 11:50:32 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.712 11:50:32 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.712 11:50:32 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.712 11:50:32 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.712 11:50:32 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:34.712 11:50:32 nvme -- scripts/common.sh@345 -- # : 1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.712 11:50:32 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.712 11:50:32 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@353 -- # local d=1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.712 11:50:32 nvme -- scripts/common.sh@355 -- # echo 1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.712 11:50:32 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@353 -- # local d=2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.712 11:50:32 nvme -- scripts/common.sh@355 -- # echo 2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.712 11:50:32 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.712 11:50:32 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.712 11:50:32 nvme -- scripts/common.sh@368 -- # return 0 00:07:34.712 11:50:32 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.712 11:50:32 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.712 --rc genhtml_branch_coverage=1 00:07:34.712 --rc genhtml_function_coverage=1 00:07:34.712 --rc genhtml_legend=1 00:07:34.712 --rc geninfo_all_blocks=1 00:07:34.712 --rc geninfo_unexecuted_blocks=1 00:07:34.712 00:07:34.712 ' 00:07:34.712 11:50:32 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.712 --rc genhtml_branch_coverage=1 00:07:34.712 --rc genhtml_function_coverage=1 00:07:34.712 --rc genhtml_legend=1 00:07:34.712 --rc geninfo_all_blocks=1 00:07:34.712 --rc geninfo_unexecuted_blocks=1 00:07:34.712 00:07:34.712 ' 00:07:34.712 11:50:32 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.712 --rc genhtml_branch_coverage=1 00:07:34.712 --rc genhtml_function_coverage=1 00:07:34.712 --rc genhtml_legend=1 00:07:34.712 --rc geninfo_all_blocks=1 00:07:34.712 --rc geninfo_unexecuted_blocks=1 00:07:34.712 00:07:34.712 ' 00:07:34.712 11:50:32 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.712 --rc genhtml_branch_coverage=1 00:07:34.712 --rc genhtml_function_coverage=1 00:07:34.712 --rc genhtml_legend=1 00:07:34.712 --rc geninfo_all_blocks=1 00:07:34.712 --rc geninfo_unexecuted_blocks=1 00:07:34.712 00:07:34.712 ' 00:07:34.712 11:50:32 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:34.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.547 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.547 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.547 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.809 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.809 11:50:33 nvme -- nvme/nvme.sh@79 -- # uname 00:07:35.809 11:50:33 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:35.809 11:50:33 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:35.809 11:50:33 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:35.809 11:50:33 nvme -- 
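The scripts/common.sh trace above resolves "lt 1.15 2": cmp_versions splits each version string on ".", "-", and ":" (the IFS=.-: reads), then compares the components numerically from left to right, treating missing components as zero. A simplified, self-contained rendering of that logic (the real helper also normalizes each field through the decimal function seen in the trace):

    # lt succeeds when version $1 sorts strictly before version $2
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # versions compare equal
    }
    lt 1.15 2 && echo "lcov predates 2.x"

Because 1 < 2 in the first component the check succeeds, which is why the harness exports the pre-2.x LCOV_OPTS shown in the trace.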
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1073 -- # stubpid=62747 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:07:35.809 Waiting for stub to ready for secondary processes... 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62747 ]] 00:07:35.809 11:50:33 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:07:35.809 [2024-11-18 11:50:33.357534] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:07:35.809 [2024-11-18 11:50:33.357671] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:36.752 [2024-11-18 11:50:34.138461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.752 [2024-11-18 11:50:34.233137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.752 [2024-11-18 11:50:34.233434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.752 [2024-11-18 11:50:34.233535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.752 [2024-11-18 11:50:34.248029] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:36.752 [2024-11-18 11:50:34.248067] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.752 [2024-11-18 11:50:34.261422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:36.752 [2024-11-18 11:50:34.261642] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:36.752 [2024-11-18 11:50:34.267115] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.752 [2024-11-18 11:50:34.267381] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:36.752 [2024-11-18 11:50:34.267467] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:36.752 [2024-11-18 11:50:34.271431] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.752 [2024-11-18 11:50:34.271707] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:36.752 [2024-11-18 11:50:34.271780] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:36.752 [2024-11-18 11:50:34.275709] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.752 [2024-11-18 11:50:34.275946] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:36.752 [2024-11-18 11:50:34.276037] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:36.752 [2024-11-18 11:50:34.276130] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:36.752 [2024-11-18 11:50:34.276224] nvme_cuse.c: 
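The stub startup above is a standard readiness handshake: _start_stub launches test/app/stub in the background with -s 4096 (4 GiB of hugepage memory), -i 0 (shared-memory id 0), and -m 0xE (a core mask selecting cores 1-3, matching the three reactors in the log), then polls once per second until the stub creates /var/run/spdk_stub0, giving up early if the PID disappears. The interleaved nvme_cuse notices are the stub registering its spdk/nvme* cuse devices while the parent waits. A condensed rendering of that loop; the real version lives in autotest_common.sh:

    stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
    "$stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        # Bail out if the stub exited before dropping its ready file
        [[ -e /proc/$stubpid ]] || { echo "stub terminated before becoming ready" >&2; exit 1; }
        echo "Waiting for stub to ready for secondary processes..."
        sleep 1s
    done
    echo done.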
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:36.752 done. 00:07:36.752 11:50:34 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:36.752 11:50:34 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:07:36.752 11:50:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:36.752 11:50:34 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:07:36.752 11:50:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.752 11:50:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.752 ************************************ 00:07:36.752 START TEST nvme_reset 00:07:36.752 ************************************ 00:07:36.752 11:50:34 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:37.010 Initializing NVMe Controllers 00:07:37.010 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:37.010 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:37.010 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:37.010 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:37.010 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:37.010 00:07:37.010 real 0m0.209s 00:07:37.010 user 0m0.075s 00:07:37.010 sys 0m0.089s 00:07:37.010 11:50:34 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.010 ************************************ 00:07:37.010 END TEST nvme_reset 00:07:37.010 ************************************ 00:07:37.010 11:50:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:37.010 11:50:34 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:37.010 11:50:34 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:37.010 11:50:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.010 11:50:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.010 ************************************ 00:07:37.010 START TEST nvme_identify 00:07:37.010 ************************************ 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:07:37.010 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:37.010 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:37.010 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:37.010 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:37.010 11:50:34 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:37.010 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:37.271 
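Before dumping controller data, nvme_identify enumerates the controllers: gen_nvme.sh prints a local bdev configuration as JSON, jq extracts each controller's PCI address (traddr), and the (( 4 == 0 )) guard verifies the list is non-empty before spdk_nvme_identify runs; its per-controller output follows. The same enumeration, condensed:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits {"config": [{"params": {"traddr": "0000:00:10.0", ...}}, ...]}
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"  # 0000:00:10.0 through 0000:00:13.0 in this run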
===================================================== 00:07:37.271 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:37.271 ===================================================== 00:07:37.271 Controller Capabilities/Features 00:07:37.271 ================================ 00:07:37.271 Vendor ID: 1b36 00:07:37.271 Subsystem Vendor ID: 1af4 00:07:37.271 Serial Number: 12341 00:07:37.271 Model Number: QEMU NVMe Ctrl 00:07:37.271 Firmware Version: 8.0.0 00:07:37.271 Recommended Arb Burst: 6 00:07:37.271 IEEE OUI Identifier: 00 54 52 00:07:37.271 Multi-path I/O 00:07:37.271 May have multiple subsystem ports: No 00:07:37.271 May have multiple controllers: No 00:07:37.271 Associated with SR-IOV VF: No 00:07:37.271 Max Data Transfer Size: 524288 00:07:37.271 Max Number of Namespaces: 256 00:07:37.271 Max Number of I/O Queues: 64 00:07:37.271 NVMe Specification Version (VS): 1.4 00:07:37.271 NVMe Specification Version (Identify): 1.4 00:07:37.271 Maximum Queue Entries: 2048 00:07:37.271 Contiguous Queues Required: Yes 00:07:37.271 Arbitration Mechanisms Supported 00:07:37.271 Weighted Round Robin: Not Supported 00:07:37.271 Vendor Specific: Not Supported 00:07:37.271 Reset Timeout: 7500 ms 00:07:37.271 Doorbell Stride: 4 bytes 00:07:37.271 NVM Subsystem Reset: Not Supported 00:07:37.271 Command Sets Supported 00:07:37.271 NVM Command Set: Supported 00:07:37.271 Boot Partition: Not Supported 00:07:37.271 Memory Page Size Minimum: 4096 bytes 00:07:37.271 Memory Page Size Maximum: 65536 bytes 00:07:37.271 Persistent Memory Region: Not Supported 00:07:37.271 Optional Asynchronous Events Supported 00:07:37.271 Namespace Attribute Notices: Supported 00:07:37.271 Firmware Activation Notices: Not Supported 00:07:37.271 ANA Change Notices: Not Supported 00:07:37.271 PLE Aggregate Log Change Notices: Not Supported 00:07:37.271 LBA Status Info Alert Notices: Not Supported 00:07:37.271 EGE Aggregate Log Change Notices: Not Supported 00:07:37.271 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.271 Zone Descriptor Change Notices: Not Supported 00:07:37.271 Discovery Log Change Notices: Not Supported 00:07:37.271 Controller Attributes 00:07:37.271 128-bit Host Identifier: Not Supported 00:07:37.271 Non-Operational Permissive Mode: Not Supported 00:07:37.271 NVM Sets: Not Supported 00:07:37.271 Read Recovery Levels: Not Supported 00:07:37.271 Endurance Groups: Not Supported 00:07:37.271 Predictable Latency Mode: Not Supported 00:07:37.271 Traffic Based Keep ALive: Not Supported 00:07:37.271 Namespace Granularity: Not Supported 00:07:37.271 SQ Associations: Not Supported 00:07:37.271 UUID List: Not Supported 00:07:37.271 Multi-Domain Subsystem: Not Supported 00:07:37.271 Fixed Capacity Management: Not Supported 00:07:37.271 Variable Capacity Management: Not Supported 00:07:37.271 Delete Endurance Group: Not Supported 00:07:37.271 Delete NVM Set: Not Supported 00:07:37.271 Extended LBA Formats Supported: Supported 00:07:37.271 Flexible Data Placement Supported: Not Supported 00:07:37.271 00:07:37.271 Controller Memory Buffer Support 00:07:37.271 ================================ 00:07:37.271 Supported: No 00:07:37.271 00:07:37.271 Persistent Memory Region Support 00:07:37.271 ================================ 00:07:37.271 Supported: No 00:07:37.271 00:07:37.271 Admin Command Set Attributes 00:07:37.271 ============================ 00:07:37.271 Security Send/Receive: Not Supported 00:07:37.271 Format NVM: Supported 00:07:37.271 Firmware Activate/Download: Not Supported 00:07:37.271 Namespace Management: 
Supported 00:07:37.271 Device Self-Test: Not Supported 00:07:37.271 Directives: Supported 00:07:37.271 NVMe-MI: Not Supported 00:07:37.271 Virtualization Management: Not Supported 00:07:37.271 Doorbell Buffer Config: Supported 00:07:37.271 Get LBA Status Capability: Not Supported 00:07:37.271 Command & Feature Lockdown Capability: Not Supported 00:07:37.271 Abort Command Limit: 4 00:07:37.271 Async Event Request Limit: 4 00:07:37.271 Number of Firmware Slots: N/A 00:07:37.271 Firmware Slot 1 Read-Only: N/A 00:07:37.271 Firmware Activation Without Reset: N/A 00:07:37.271 Multiple Update Detection Support: N/A 00:07:37.271 Firmware Update Gr[2024-11-18 11:50:34.827961] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62768 terminated unexpected 00:07:37.271 anularity: No Information Provided 00:07:37.271 Per-Namespace SMART Log: Yes 00:07:37.271 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.271 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:37.271 Command Effects Log Page: Supported 00:07:37.271 Get Log Page Extended Data: Supported 00:07:37.271 Telemetry Log Pages: Not Supported 00:07:37.271 Persistent Event Log Pages: Not Supported 00:07:37.271 Supported Log Pages Log Page: May Support 00:07:37.271 Commands Supported & Effects Log Page: Not Supported 00:07:37.271 Feature Identifiers & Effects Log Page:May Support 00:07:37.271 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.271 Data Area 4 for Telemetry Log: Not Supported 00:07:37.271 Error Log Page Entries Supported: 1 00:07:37.271 Keep Alive: Not Supported 00:07:37.271 00:07:37.271 NVM Command Set Attributes 00:07:37.271 ========================== 00:07:37.271 Submission Queue Entry Size 00:07:37.271 Max: 64 00:07:37.271 Min: 64 00:07:37.271 Completion Queue Entry Size 00:07:37.271 Max: 16 00:07:37.271 Min: 16 00:07:37.271 Number of Namespaces: 256 00:07:37.271 Compare Command: Supported 00:07:37.271 Write Uncorrectable Command: Not Supported 00:07:37.271 Dataset Management Command: Supported 00:07:37.271 Write Zeroes Command: Supported 00:07:37.271 Set Features Save Field: Supported 00:07:37.271 Reservations: Not Supported 00:07:37.271 Timestamp: Supported 00:07:37.271 Copy: Supported 00:07:37.271 Volatile Write Cache: Present 00:07:37.271 Atomic Write Unit (Normal): 1 00:07:37.271 Atomic Write Unit (PFail): 1 00:07:37.271 Atomic Compare & Write Unit: 1 00:07:37.271 Fused Compare & Write: Not Supported 00:07:37.271 Scatter-Gather List 00:07:37.271 SGL Command Set: Supported 00:07:37.271 SGL Keyed: Not Supported 00:07:37.271 SGL Bit Bucket Descriptor: Not Supported 00:07:37.271 SGL Metadata Pointer: Not Supported 00:07:37.271 Oversized SGL: Not Supported 00:07:37.271 SGL Metadata Address: Not Supported 00:07:37.271 SGL Offset: Not Supported 00:07:37.271 Transport SGL Data Block: Not Supported 00:07:37.271 Replay Protected Memory Block: Not Supported 00:07:37.271 00:07:37.271 Firmware Slot Information 00:07:37.271 ========================= 00:07:37.271 Active slot: 1 00:07:37.271 Slot 1 Firmware Revision: 1.0 00:07:37.271 00:07:37.271 00:07:37.271 Commands Supported and Effects 00:07:37.271 ============================== 00:07:37.271 Admin Commands 00:07:37.271 -------------- 00:07:37.271 Delete I/O Submission Queue (00h): Supported 00:07:37.271 Create I/O Submission Queue (01h): Supported 00:07:37.271 Get Log Page (02h): Supported 00:07:37.271 Delete I/O Completion Queue (04h): Supported 00:07:37.271 Create I/O Completion Queue (05h): Supported 00:07:37.271 Identify (06h): 
Supported 00:07:37.271 Abort (08h): Supported 00:07:37.271 Set Features (09h): Supported 00:07:37.271 Get Features (0Ah): Supported 00:07:37.271 Asynchronous Event Request (0Ch): Supported 00:07:37.271 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.271 Directive Send (19h): Supported 00:07:37.272 Directive Receive (1Ah): Supported 00:07:37.272 Virtualization Management (1Ch): Supported 00:07:37.272 Doorbell Buffer Config (7Ch): Supported 00:07:37.272 Format NVM (80h): Supported LBA-Change 00:07:37.272 I/O Commands 00:07:37.272 ------------ 00:07:37.272 Flush (00h): Supported LBA-Change 00:07:37.272 Write (01h): Supported LBA-Change 00:07:37.272 Read (02h): Supported 00:07:37.272 Compare (05h): Supported 00:07:37.272 Write Zeroes (08h): Supported LBA-Change 00:07:37.272 Dataset Management (09h): Supported LBA-Change 00:07:37.272 Unknown (0Ch): Supported 00:07:37.272 Unknown (12h): Supported 00:07:37.272 Copy (19h): Supported LBA-Change 00:07:37.272 Unknown (1Dh): Supported LBA-Change 00:07:37.272 00:07:37.272 Error Log 00:07:37.272 ========= 00:07:37.272 00:07:37.272 Arbitration 00:07:37.272 =========== 00:07:37.272 Arbitration Burst: no limit 00:07:37.272 00:07:37.272 Power Management 00:07:37.272 ================ 00:07:37.272 Number of Power States: 1 00:07:37.272 Current Power State: Power State #0 00:07:37.272 Power State #0: 00:07:37.272 Max Power: 25.00 W 00:07:37.272 Non-Operational State: Operational 00:07:37.272 Entry Latency: 16 microseconds 00:07:37.272 Exit Latency: 4 microseconds 00:07:37.272 Relative Read Throughput: 0 00:07:37.272 Relative Read Latency: 0 00:07:37.272 Relative Write Throughput: 0 00:07:37.272 Relative Write Latency: 0 00:07:37.272 Idle Power[2024-11-18 11:50:34.828956] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62768 terminated unexpected 00:07:37.272 : Not Reported 00:07:37.272 Active Power: Not Reported 00:07:37.272 Non-Operational Permissive Mode: Not Supported 00:07:37.272 00:07:37.272 Health Information 00:07:37.272 ================== 00:07:37.272 Critical Warnings: 00:07:37.272 Available Spare Space: OK 00:07:37.272 Temperature: OK 00:07:37.272 Device Reliability: OK 00:07:37.272 Read Only: No 00:07:37.272 Volatile Memory Backup: OK 00:07:37.272 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.272 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.272 Available Spare: 0% 00:07:37.272 Available Spare Threshold: 0% 00:07:37.272 Life Percentage Used: 0% 00:07:37.272 Data Units Read: 1159 00:07:37.272 Data Units Written: 1026 00:07:37.272 Host Read Commands: 61392 00:07:37.272 Host Write Commands: 60163 00:07:37.272 Controller Busy Time: 0 minutes 00:07:37.272 Power Cycles: 0 00:07:37.272 Power On Hours: 0 hours 00:07:37.272 Unsafe Shutdowns: 0 00:07:37.272 Unrecoverable Media Errors: 0 00:07:37.272 Lifetime Error Log Entries: 0 00:07:37.272 Warning Temperature Time: 0 minutes 00:07:37.272 Critical Temperature Time: 0 minutes 00:07:37.272 00:07:37.272 Number of Queues 00:07:37.272 ================ 00:07:37.272 Number of I/O Submission Queues: 64 00:07:37.272 Number of I/O Completion Queues: 64 00:07:37.272 00:07:37.272 ZNS Specific Controller Data 00:07:37.272 ============================ 00:07:37.272 Zone Append Size Limit: 0 00:07:37.272 00:07:37.272 00:07:37.272 Active Namespaces 00:07:37.272 ================= 00:07:37.272 Namespace ID:1 00:07:37.272 Error Recovery Timeout: Unlimited 00:07:37.272 Command Set Identifier: NVM (00h) 00:07:37.272 Deallocate: Supported 00:07:37.272 
Deallocated/Unwritten Error: Supported 00:07:37.272 Deallocated Read Value: All 0x00 00:07:37.272 Deallocate in Write Zeroes: Not Supported 00:07:37.272 Deallocated Guard Field: 0xFFFF 00:07:37.272 Flush: Supported 00:07:37.272 Reservation: Not Supported 00:07:37.272 Namespace Sharing Capabilities: Private 00:07:37.272 Size (in LBAs): 1310720 (5GiB) 00:07:37.272 Capacity (in LBAs): 1310720 (5GiB) 00:07:37.272 Utilization (in LBAs): 1310720 (5GiB) 00:07:37.272 Thin Provisioning: Not Supported 00:07:37.272 Per-NS Atomic Units: No 00:07:37.272 Maximum Single Source Range Length: 128 00:07:37.272 Maximum Copy Length: 128 00:07:37.272 Maximum Source Range Count: 128 00:07:37.272 NGUID/EUI64 Never Reused: No 00:07:37.272 Namespace Write Protected: No 00:07:37.272 Number of LBA Formats: 8 00:07:37.272 Current LBA Format: LBA Format #04 00:07:37.272 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.272 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.272 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.272 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.272 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.272 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.272 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.272 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.272 00:07:37.272 NVM Specific Namespace Data 00:07:37.272 =========================== 00:07:37.272 Logical Block Storage Tag Mask: 0 00:07:37.272 Protection Information Capabilities: 00:07:37.272 16b Guard Protection Information Storage Tag Support: No 00:07:37.272 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.272 Storage Tag Check Read Support: No 00:07:37.272 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.272 ===================================================== 00:07:37.272 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:37.272 ===================================================== 00:07:37.272 Controller Capabilities/Features 00:07:37.272 ================================ 00:07:37.272 Vendor ID: 1b36 00:07:37.272 Subsystem Vendor ID: 1af4 00:07:37.272 Serial Number: 12343 00:07:37.272 Model Number: QEMU NVMe Ctrl 00:07:37.272 Firmware Version: 8.0.0 00:07:37.272 Recommended Arb Burst: 6 00:07:37.272 IEEE OUI Identifier: 00 54 52 00:07:37.272 Multi-path I/O 00:07:37.272 May have multiple subsystem ports: No 00:07:37.272 May have multiple controllers: Yes 00:07:37.272 Associated with SR-IOV VF: No 00:07:37.272 Max Data Transfer Size: 524288 00:07:37.272 Max Number of Namespaces: 256 00:07:37.272 Max Number of I/O Queues: 64 00:07:37.272 NVMe Specification Version (VS): 1.4 00:07:37.272 NVMe Specification Version (Identify): 1.4 00:07:37.272 Maximum Queue Entries: 
2048 00:07:37.272 Contiguous Queues Required: Yes 00:07:37.272 Arbitration Mechanisms Supported 00:07:37.272 Weighted Round Robin: Not Supported 00:07:37.272 Vendor Specific: Not Supported 00:07:37.272 Reset Timeout: 7500 ms 00:07:37.272 Doorbell Stride: 4 bytes 00:07:37.272 NVM Subsystem Reset: Not Supported 00:07:37.272 Command Sets Supported 00:07:37.272 NVM Command Set: Supported 00:07:37.272 Boot Partition: Not Supported 00:07:37.272 Memory Page Size Minimum: 4096 bytes 00:07:37.272 Memory Page Size Maximum: 65536 bytes 00:07:37.272 Persistent Memory Region: Not Supported 00:07:37.272 Optional Asynchronous Events Supported 00:07:37.272 Namespace Attribute Notices: Supported 00:07:37.272 Firmware Activation Notices: Not Supported 00:07:37.272 ANA Change Notices: Not Supported 00:07:37.272 PLE Aggregate Log Change Notices: Not Supported 00:07:37.272 LBA Status Info Alert Notices: Not Supported 00:07:37.272 EGE Aggregate Log Change Notices: Not Supported 00:07:37.272 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.272 Zone Descriptor Change Notices: Not Supported 00:07:37.272 Discovery Log Change Notices: Not Supported 00:07:37.272 Controller Attributes 00:07:37.272 128-bit Host Identifier: Not Supported 00:07:37.272 Non-Operational Permissive Mode: Not Supported 00:07:37.272 NVM Sets: Not Supported 00:07:37.272 Read Recovery Levels: Not Supported 00:07:37.272 Endurance Groups: Supported 00:07:37.272 Predictable Latency Mode: Not Supported 00:07:37.272 Traffic Based Keep ALive: Not Supported 00:07:37.272 Namespace Granularity: Not Supported 00:07:37.272 SQ Associations: Not Supported 00:07:37.272 UUID List: Not Supported 00:07:37.272 Multi-Domain Subsystem: Not Supported 00:07:37.272 Fixed Capacity Management: Not Supported 00:07:37.272 Variable Capacity Management: Not Supported 00:07:37.272 Delete Endurance Group: Not Supported 00:07:37.272 Delete NVM Set: Not Supported 00:07:37.272 Extended LBA Formats Supported: Supported 00:07:37.272 Flexible Data Placement Supported: Supported 00:07:37.272 00:07:37.272 Controller Memory Buffer Support 00:07:37.272 ================================ 00:07:37.273 Supported: No 00:07:37.273 00:07:37.273 Persistent Memory Region Support 00:07:37.273 ================================ 00:07:37.273 Supported: No 00:07:37.273 00:07:37.273 Admin Command Set Attributes 00:07:37.273 ============================ 00:07:37.273 Security Send/Receive: Not Supported 00:07:37.273 Format NVM: Supported 00:07:37.273 Firmware Activate/Download: Not Supported 00:07:37.273 Namespace Management: Supported 00:07:37.273 Device Self-Test: Not Supported 00:07:37.273 Directives: Supported 00:07:37.273 NVMe-MI: Not Supported 00:07:37.273 Virtualization Management: Not Supported 00:07:37.273 Doorbell Buffer Config: Supported 00:07:37.273 Get LBA Status Capability: Not Supported 00:07:37.273 Command & Feature Lockdown Capability: Not Supported 00:07:37.273 Abort Command Limit: 4 00:07:37.273 Async Event Request Limit: 4 00:07:37.273 Number of Firmware Slots: N/A 00:07:37.273 Firmware Slot 1 Read-Only: N/A 00:07:37.273 Firmware Activation Without Reset: N/A 00:07:37.273 Multiple Update Detection Support: N/A 00:07:37.273 Firmware Update Granularity: No Information Provided 00:07:37.273 Per-Namespace SMART Log: Yes 00:07:37.273 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.273 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:37.273 Command Effects Log Page: Supported 00:07:37.273 Get Log Page Extended Data: Supported 00:07:37.273 Telemetry Log Pages: 
Not Supported 00:07:37.273 Persistent Event Log Pages: Not Supported 00:07:37.273 Supported Log Pages Log Page: May Support 00:07:37.273 Commands Supported & Effects Log Page: Not Supported 00:07:37.273 Feature Identifiers & Effects Log Page:May Support 00:07:37.273 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.273 Data Area 4 for Telemetry Log: Not Supported 00:07:37.273 Error Log Page Entries Supported: 1 00:07:37.273 Keep Alive: Not Supported 00:07:37.273 00:07:37.273 NVM Command Set Attributes 00:07:37.273 ========================== 00:07:37.273 Submission Queue Entry Size 00:07:37.273 Max: 64 00:07:37.273 Min: 64 00:07:37.273 Completion Queue Entry Size 00:07:37.273 Max: 16 00:07:37.273 Min: 16 00:07:37.273 Number of Namespaces: 256 00:07:37.273 Compare Command: Supported 00:07:37.273 Write Uncorrectable Command: Not Supported 00:07:37.273 Dataset Management Command: Supported 00:07:37.273 Write Zeroes Command: Supported 00:07:37.273 Set Features Save Field: Supported 00:07:37.273 Reservations: Not Supported 00:07:37.273 Timestamp: Supported 00:07:37.273 Copy: Supported 00:07:37.273 Volatile Write Cache: Present 00:07:37.273 Atomic Write Unit (Normal): 1 00:07:37.273 Atomic Write Unit (PFail): 1 00:07:37.273 Atomic Compare & Write Unit: 1 00:07:37.273 Fused Compare & Write: Not Supported 00:07:37.273 Scatter-Gather List 00:07:37.273 SGL Command Set: Supported 00:07:37.273 SGL Keyed: Not Supported 00:07:37.273 SGL Bit Bucket Descriptor: Not Supported 00:07:37.273 SGL Metadata Pointer: Not Supported 00:07:37.273 Oversized SGL: Not Supported 00:07:37.273 SGL Metadata Address: Not Supported 00:07:37.273 SGL Offset: Not Supported 00:07:37.273 Transport SGL Data Block: Not Supported 00:07:37.273 Replay Protected Memory Block: Not Supported 00:07:37.273 00:07:37.273 Firmware Slot Information 00:07:37.273 ========================= 00:07:37.273 Active slot: 1 00:07:37.273 Slot 1 Firmware Revision: 1.0 00:07:37.273 00:07:37.273 00:07:37.273 Commands Supported and Effects 00:07:37.273 ============================== 00:07:37.273 Admin Commands 00:07:37.273 -------------- 00:07:37.273 Delete I/O Submission Queue (00h): Supported 00:07:37.273 Create I/O Submission Queue (01h): Supported 00:07:37.273 Get Log Page (02h): Supported 00:07:37.273 Delete I/O Completion Queue (04h): Supported 00:07:37.273 Create I/O Completion Queue (05h): Supported 00:07:37.273 Identify (06h): Supported 00:07:37.273 Abort (08h): Supported 00:07:37.273 Set Features (09h): Supported 00:07:37.273 Get Features (0Ah): Supported 00:07:37.273 Asynchronous Event Request (0Ch): Supported 00:07:37.273 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.273 Directive Send (19h): Supported 00:07:37.273 Directive Receive (1Ah): Supported 00:07:37.273 Virtualization Management (1Ch): Supported 00:07:37.273 Doorbell Buffer Config (7Ch): Supported 00:07:37.273 Format NVM (80h): Supported LBA-Change 00:07:37.273 I/O Commands 00:07:37.273 ------------ 00:07:37.273 Flush (00h): Supported LBA-Change 00:07:37.273 Write (01h): Supported LBA-Change 00:07:37.273 Read (02h): Supported 00:07:37.273 Compare (05h): Supported 00:07:37.273 Write Zeroes (08h): Supported LBA-Change 00:07:37.273 Dataset Management (09h): Supported LBA-Change 00:07:37.273 Unknown (0Ch): Supported 00:07:37.273 Unknown (12h): Supported 00:07:37.273 Copy (19h): Supported LBA-Change 00:07:37.273 Unknown (1Dh): Supported LBA-Change 00:07:37.273 00:07:37.273 Error Log 00:07:37.273 ========= 00:07:37.273 00:07:37.273 Arbitration 00:07:37.273 
=========== 00:07:37.273 Arbitration Burst: no limit 00:07:37.273 00:07:37.273 Power Management 00:07:37.273 ================ 00:07:37.273 Number of Power States: 1 00:07:37.273 Current Power State: Power State #0 00:07:37.273 Power State #0: 00:07:37.273 Max Power: 25.00 W 00:07:37.273 Non-Operational State: Operational 00:07:37.273 Entry Latency: 16 microseconds 00:07:37.273 Exit Latency: 4 microseconds 00:07:37.273 Relative Read Throughput: 0 00:07:37.273 Relative Read Latency: 0 00:07:37.273 Relative Write Throughput: 0 00:07:37.273 Relative Write Latency: 0 00:07:37.273 Idle Power: Not Reported 00:07:37.273 Active Power: Not Reported 00:07:37.273 Non-Operational Permissive Mode: Not Supported 00:07:37.273 00:07:37.273 Health Information 00:07:37.273 ================== 00:07:37.273 Critical Warnings: 00:07:37.273 Available Spare Space: OK 00:07:37.273 Temperature: OK 00:07:37.273 Device Reliability: OK 00:07:37.273 Read Only: No 00:07:37.273 Volatile Memory Backup: OK 00:07:37.273 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.273 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.273 Available Spare: 0% 00:07:37.273 Available Spare Threshold: 0% 00:07:37.273 Life Percentage Used: 0% 00:07:37.273 Data Units Read: 958 00:07:37.273 Data Units Written: 887 00:07:37.273 Host Read Commands: 43624 00:07:37.273 Host Write Commands: 43049 00:07:37.273 Controller Busy Time: 0 minutes 00:07:37.273 Power Cycles: 0 00:07:37.273 Power On Hours: 0 hours 00:07:37.273 Unsafe Shutdowns: 0 00:07:37.273 Unrecoverable Media Errors: 0 00:07:37.273 Lifetime Error Log Entries: 0 00:07:37.273 Warning Temperature Time: 0 minutes 00:07:37.273 Critical Temperature Time: 0 minutes 00:07:37.273 00:07:37.273 Number of Queues 00:07:37.273 ================ 00:07:37.273 Number of I/O Submission Queues: 64 00:07:37.273 Number of I/O Completion Queues: 64 00:07:37.273 00:07:37.273 ZNS Specific Controller Data 00:07:37.273 ============================ 00:07:37.273 Zone Append Size Limit: 0 00:07:37.273 00:07:37.273 00:07:37.273 Active Namespaces 00:07:37.273 ================= 00:07:37.273 Namespace ID:1 00:07:37.273 Error Recovery Timeout: Unlimited 00:07:37.273 Command Set Identifier: NVM (00h) 00:07:37.273 Deallocate: Supported 00:07:37.273 Deallocated/Unwritten Error: Supported 00:07:37.273 Deallocated Read Value: All 0x00 00:07:37.273 Deallocate in Write Zeroes: Not Supported 00:07:37.273 Deallocated Guard Field: 0xFFFF 00:07:37.273 Flush: Supported 00:07:37.273 Reservation: Not Supported 00:07:37.273 Namespace Sharing Capabilities: Multiple Controllers 00:07:37.273 Size (in LBAs): 262144 (1GiB) 00:07:37.273 Capacity (in LBAs): 262144 (1GiB) 00:07:37.273 Utilization (in LBAs): 262144 (1GiB) 00:07:37.273 Thin Provisioning: Not Supported 00:07:37.273 Per-NS Atomic Units: No 00:07:37.273 Maximum Single Source Range Length: 128 00:07:37.273 Maximum Copy Length: 128 00:07:37.273 Maximum Source Range Count: 128 00:07:37.273 NGUID/EUI64 Never Reused: No 00:07:37.273 Namespace Write Protected: No 00:07:37.273 Endurance group ID: 1 00:07:37.273 Number of LBA Formats: 8 00:07:37.273 Current LBA Format: LBA Format #04 00:07:37.273 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.273 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.273 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.273 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.273 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.274 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.274 LBA Format #06: Data 
Size: 4096 Metadata Size: 16 00:07:37.274 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.274 00:07:37.274 Get Feature FDP: 00:07:37.274 ================ 00:07:37.274 Enabled: Yes 00:07:37.274 FDP configuration index: 0 00:07:37.274 00:07:37.274 FDP configurations log page 00:07:37.274 =========================== 00:07:37.274 Number of FDP configurations: 1 00:07:37.274 Version: 0 00:07:37.274 Size: 112 00:07:37.274 FDP Configuration Descriptor: 0 00:07:37.274 Descriptor Size: 96 00:07:37.274 Reclaim Group Identifier format: 2 00:07:37.274 FDP Volatile Write Cache: Not Present 00:07:37.274 FDP Configuration: Valid 00:07:37.274 Vendor Specific Size: 0 00:07:37.274 Number of Reclaim Groups: 2 00:07:37.274 Number of Reclaim Unit Handles: 8 00:07:37.274 Max Placement Identifiers: 128 00:07:37.274 Number of Namespaces Supported: 256 00:07:37.274 Reclaim Unit Nominal Size: 6000000 bytes 00:07:37.274 Estimated Reclaim Unit Time Limit: Not Reported 00:07:37.274 RUH Desc #000: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #001: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #002: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #003: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #004: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #005: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #006: RUH Type: Initially Isolated 00:07:37.274 RUH Desc #007: RUH Type: Initially Isolated 00:07:37.274 00:07:37.274 FDP reclaim unit handle usage log page 00:07:37.274 ====================================== 00:07:37.274 Number of Reclaim Unit Handles: 8 00:07:37.274 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:37.274 RUH Usage Desc #001: RUH Attributes: Unused 00:07:37.274 RUH Usage Desc #002: RUH Attributes: Unused 00:07:37.274 RUH Usage Desc #003: RUH Attributes: Unused 00:07:37.274 RUH Usage Desc #004: RUH Attributes: Unused 00:07:37.274 [2024-11-18 11:50:34.830566] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62768 terminated unexpected 00:07:37.274 RUH Usage Desc #005: RUH Attributes: Unused 00:07:37.274 RUH Usage Desc #006: RUH Attributes: Unused 00:07:37.274 RUH Usage Desc #007: RUH Attributes: Unused 00:07:37.274 00:07:37.274 FDP statistics log page 00:07:37.274 ======================= 00:07:37.274 Host bytes with metadata written: 559194112 00:07:37.274 Media bytes with metadata written: 559271936 00:07:37.274 Media bytes erased: 0 00:07:37.274 00:07:37.274 FDP events log page 00:07:37.274 =================== 00:07:37.274 Number of FDP events: 0 00:07:37.274 00:07:37.274 NVM Specific Namespace Data 00:07:37.274 =========================== 00:07:37.274 Logical Block Storage Tag Mask: 0 00:07:37.274 Protection Information Capabilities: 00:07:37.274 16b Guard Protection Information Storage Tag Support: No 00:07:37.274 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.274 Storage Tag Check Read Support: No 00:07:37.274 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #05: Storage Tag Size: 0 ,
Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.274 ===================================================== 00:07:37.274 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:37.274 ===================================================== 00:07:37.274 Controller Capabilities/Features 00:07:37.274 ================================ 00:07:37.274 Vendor ID: 1b36 00:07:37.274 Subsystem Vendor ID: 1af4 00:07:37.274 Serial Number: 12340 00:07:37.274 Model Number: QEMU NVMe Ctrl 00:07:37.274 Firmware Version: 8.0.0 00:07:37.274 Recommended Arb Burst: 6 00:07:37.274 IEEE OUI Identifier: 00 54 52 00:07:37.274 Multi-path I/O 00:07:37.274 May have multiple subsystem ports: No 00:07:37.274 May have multiple controllers: No 00:07:37.274 Associated with SR-IOV VF: No 00:07:37.274 Max Data Transfer Size: 524288 00:07:37.274 Max Number of Namespaces: 256 00:07:37.274 Max Number of I/O Queues: 64 00:07:37.274 NVMe Specification Version (VS): 1.4 00:07:37.274 NVMe Specification Version (Identify): 1.4 00:07:37.274 Maximum Queue Entries: 2048 00:07:37.274 Contiguous Queues Required: Yes 00:07:37.274 Arbitration Mechanisms Supported 00:07:37.274 Weighted Round Robin: Not Supported 00:07:37.274 Vendor Specific: Not Supported 00:07:37.274 Reset Timeout: 7500 ms 00:07:37.274 Doorbell Stride: 4 bytes 00:07:37.274 NVM Subsystem Reset: Not Supported 00:07:37.274 Command Sets Supported 00:07:37.274 NVM Command Set: Supported 00:07:37.274 Boot Partition: Not Supported 00:07:37.274 Memory Page Size Minimum: 4096 bytes 00:07:37.274 Memory Page Size Maximum: 65536 bytes 00:07:37.274 Persistent Memory Region: Not Supported 00:07:37.274 Optional Asynchronous Events Supported 00:07:37.274 Namespace Attribute Notices: Supported 00:07:37.274 Firmware Activation Notices: Not Supported 00:07:37.274 ANA Change Notices: Not Supported 00:07:37.274 PLE Aggregate Log Change Notices: Not Supported 00:07:37.274 LBA Status Info Alert Notices: Not Supported 00:07:37.274 EGE Aggregate Log Change Notices: Not Supported 00:07:37.274 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.274 Zone Descriptor Change Notices: Not Supported 00:07:37.274 Discovery Log Change Notices: Not Supported 00:07:37.274 Controller Attributes 00:07:37.274 128-bit Host Identifier: Not Supported 00:07:37.274 Non-Operational Permissive Mode: Not Supported 00:07:37.274 NVM Sets: Not Supported 00:07:37.274 Read Recovery Levels: Not Supported 00:07:37.274 Endurance Groups: Not Supported 00:07:37.274 Predictable Latency Mode: Not Supported 00:07:37.274 Traffic Based Keep ALive: Not Supported 00:07:37.274 Namespace Granularity: Not Supported 00:07:37.274 SQ Associations: Not Supported 00:07:37.274 UUID List: Not Supported 00:07:37.274 Multi-Domain Subsystem: Not Supported 00:07:37.274 Fixed Capacity Management: Not Supported 00:07:37.274 Variable Capacity Management: Not Supported 00:07:37.274 Delete Endurance Group: Not Supported 00:07:37.274 Delete NVM Set: Not Supported 00:07:37.274 Extended LBA Formats Supported: Supported 00:07:37.274 Flexible Data Placement Supported: Not Supported 00:07:37.274 00:07:37.274 Controller Memory Buffer Support 00:07:37.274 ================================ 00:07:37.274 Supported: No 00:07:37.274 00:07:37.274 Persistent Memory Region Support 00:07:37.274 ================================ 00:07:37.274 Supported: No 00:07:37.274 
00:07:37.274 Admin Command Set Attributes 00:07:37.274 ============================ 00:07:37.274 Security Send/Receive: Not Supported 00:07:37.274 Format NVM: Supported 00:07:37.274 Firmware Activate/Download: Not Supported 00:07:37.274 Namespace Management: Supported 00:07:37.274 Device Self-Test: Not Supported 00:07:37.274 Directives: Supported 00:07:37.274 NVMe-MI: Not Supported 00:07:37.274 Virtualization Management: Not Supported 00:07:37.274 Doorbell Buffer Config: Supported 00:07:37.274 Get LBA Status Capability: Not Supported 00:07:37.274 Command & Feature Lockdown Capability: Not Supported 00:07:37.274 Abort Command Limit: 4 00:07:37.274 Async Event Request Limit: 4 00:07:37.274 Number of Firmware Slots: N/A 00:07:37.274 Firmware Slot 1 Read-Only: N/A 00:07:37.274 Firmware Activation Without Reset: N/A 00:07:37.274 Multiple Update Detection Support: N/A 00:07:37.274 Firmware Update Granularity: No Information Provided 00:07:37.274 Per-Namespace SMART Log: Yes 00:07:37.274 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.274 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:37.274 Command Effects Log Page: Supported 00:07:37.274 Get Log Page Extended Data: Supported 00:07:37.274 Telemetry Log Pages: Not Supported 00:07:37.274 Persistent Event Log Pages: Not Supported 00:07:37.274 Supported Log Pages Log Page: May Support 00:07:37.274 Commands Supported & Effects Log Page: Not Supported 00:07:37.274 Feature Identifiers & Effects Log Page:May Support 00:07:37.274 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.274 Data Area 4 for Telemetry Log: Not Supported 00:07:37.274 Error Log Page Entries Supported: 1 00:07:37.274 Keep Alive: Not Supported 00:07:37.274 00:07:37.274 NVM Command Set Attributes 00:07:37.274 ========================== 00:07:37.274 Submission Queue Entry Size 00:07:37.274 Max: 64 00:07:37.274 Min: 64 00:07:37.274 Completion Queue Entry Size 00:07:37.275 Max: 16 00:07:37.275 Min: 16 00:07:37.275 Number of Namespaces: 256 00:07:37.275 Compare Command: Supported 00:07:37.275 Write Uncorrectable Command: Not Supported 00:07:37.275 Dataset Management Command: Supported 00:07:37.275 Write Zeroes Command: Supported 00:07:37.275 Set Features Save Field: Supported 00:07:37.275 Reservations: Not Supported 00:07:37.275 Timestamp: Supported 00:07:37.275 Copy: Supported 00:07:37.275 Volatile Write Cache: Present 00:07:37.275 Atomic Write Unit (Normal): 1 00:07:37.275 Atomic Write Unit (PFail): 1 00:07:37.275 Atomic Compare & Write Unit: 1 00:07:37.275 Fused Compare & Write: Not Supported 00:07:37.275 Scatter-Gather List 00:07:37.275 SGL Command Set: Supported 00:07:37.275 SGL Keyed: Not Supported 00:07:37.275 SGL Bit Bucket Descriptor: Not Supported 00:07:37.275 SGL Metadata Pointer: Not Supported 00:07:37.275 Oversized SGL: Not Supported 00:07:37.275 SGL Metadata Address: Not Supported 00:07:37.275 SGL Offset: Not Supported 00:07:37.275 Transport SGL Data Block: Not Supported 00:07:37.275 Replay Protected Memory Block: Not Supported 00:07:37.275 00:07:37.275 Firmware Slot Information 00:07:37.275 ========================= 00:07:37.275 Active slot: 1 00:07:37.275 Slot 1 Firmware Revision: 1.0 00:07:37.275 00:07:37.275 00:07:37.275 Commands Supported and Effects 00:07:37.275 ============================== 00:07:37.275 Admin Commands 00:07:37.275 -------------- 00:07:37.275 Delete I/O Submission Queue (00h): Supported 00:07:37.275 Create I/O Submission Queue (01h): Supported 00:07:37.275 Get Log Page (02h): Supported 00:07:37.275 Delete I/O Completion Queue 
(04h): Supported 00:07:37.275 Create I/O Completion Queue (05h): Supported 00:07:37.275 Identify (06h): Supported 00:07:37.275 Abort (08h): Supported 00:07:37.275 Set Features (09h): Supported 00:07:37.275 Get Features (0Ah): Supported 00:07:37.275 Asynchronous Event Request (0Ch): Supported 00:07:37.275 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.275 Directive Send (19h): Supported 00:07:37.275 Directive Receive (1Ah): Supported 00:07:37.275 Virtualization Management (1Ch): Supported 00:07:37.275 Doorbell Buffer Config (7Ch): Supported 00:07:37.275 Format NVM (80h): Supported LBA-Change 00:07:37.275 I/O Commands 00:07:37.275 ------------ 00:07:37.275 Flush (00h): Supported LBA-Change 00:07:37.275 Write (01h): Supported LBA-Change 00:07:37.275 Read (02h): Supported 00:07:37.275 Compare (05h): Supported 00:07:37.275 Write Zeroes (08h): Supported LBA-Change 00:07:37.275 Dataset Management (09h): Supported LBA-Change 00:07:37.275 Unknown (0Ch): Supported 00:07:37.275 Unknown (12h): Supported 00:07:37.275 Copy (19h): Supported LBA-Change 00:07:37.275 Unknown (1Dh): Supported LBA-Change 00:07:37.275 00:07:37.275 Error Log 00:07:37.275 ========= 00:07:37.275 00:07:37.275 Arbitration 00:07:37.275 =========== 00:07:37.275 Arbitration Burst: no limit 00:07:37.275 00:07:37.275 Power Management 00:07:37.275 ================ 00:07:37.275 Number of Power States: 1 00:07:37.275 Current Power State: Power State #0 00:07:37.275 Power State #0: 00:07:37.275 Max Power: 25.00 W 00:07:37.275 Non-Operational State: Operational 00:07:37.275 Entry Latency: 16 microseconds 00:07:37.275 Exit Latency: 4 microseconds 00:07:37.275 Relative Read Throughput: 0 00:07:37.275 Relative Read Latency: 0 00:07:37.275 Relative Write Throughput: 0 00:07:37.275 Relative Write Latency: 0 00:07:37.275 Idle Power: Not Reported 00:07:37.275 Active Power: Not Reported 00:07:37.275 Non-Operational Permissive Mode: Not Supported 00:07:37.275 00:07:37.275 Health Information 00:07:37.275 ================== 00:07:37.275 Critical Warnings: 00:07:37.275 Available Spare Space: OK 00:07:37.275 Temperature: OK 00:07:37.275 Device Reliability: OK 00:07:37.275 Read Only: No 00:07:37.275 Volatile Memory Backup: OK 00:07:37.275 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.275 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.275 Available Spare: 0% 00:07:37.275 Available Spare Threshold: 0% 00:07:37.275 Life Percentage Used: 0% 00:07:37.275 Data Units Read: 733 00:07:37.275 Data Units Written: 661 00:07:37.275 Host Read Commands: 41324 00:07:37.275 Host Write Commands: 41110 00:07:37.275 Controller Busy Time: 0 minutes 00:07:37.275 Power Cycles: 0 00:07:37.275 Power On Hours: 0 hours 00:07:37.275 Unsafe Shutdowns: 0 00:07:37.275 Unrecoverable Media Errors: 0 00:07:37.275 Lifetime Error Log Entries: 0 00:07:37.275 Warning Temperature Time: 0 minutes 00:07:37.275 Critical Temperature Time: 0 minutes 00:07:37.275 00:07:37.275 Number of Queues 00:07:37.275 ================ 00:07:37.275 Number of I/O Submission Queues: 64 00:07:37.275 Number of I/O Completion Queues: 64 00:07:37.275 00:07:37.275 ZNS Specific Controller Data 00:07:37.275 ============================ 00:07:37.275 Zone Append Size Limit: 0 00:07:37.275 00:07:37.275 00:07:37.275 Active Namespaces 00:07:37.275 ================= 00:07:37.275 Namespace ID:1 00:07:37.275 Error Recovery Timeout: Unlimited 00:07:37.275 Command Set Identifier: NVM (00h) 00:07:37.275 Deallocate: Supported 00:07:37.275 Deallocated/Unwritten Error: Supported 00:07:37.275 
Deallocated Read Value: All 0x00 00:07:37.275 Deallocate in Write Zeroes: Not Supported 00:07:37.275 Deallocated Guard Field: 0xFFFF 00:07:37.275 Flush: Supported 00:07:37.275 Reservation: Not Supported 00:07:37.275 Metadata Transferred as: Separate Metadata Buffer 00:07:37.275 Namespace Sharing Capabilities: Private 00:07:37.275 Size (in LBAs): 1548666 (5GiB) 00:07:37.275 Capacity (in LBAs): 1548666 (5GiB) 00:07:37.275 Utilization (in LBAs): 1548666 (5GiB) 00:07:37.275 Thin Provisioning: Not Supported 00:07:37.275 Per-NS Atomic Units: No 00:07:37.275 Maximum Single Source Range Length: 128 00:07:37.275 Maximum Copy Length: 128 00:07:37.275 Maximum Source Range Count: 128 00:07:37.275 NGUID/EUI64 Never Reused: No 00:07:37.275 Namespace Write Protected: No 00:07:37.275 Number of LBA Formats: 8 00:07:37.275 Current LBA Format: LBA Format #07 00:07:37.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.275 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.275 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.275 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.275 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.275 LBA Form[2024-11-18 11:50:34.831257] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62768 terminated unexpected 00:07:37.275 at #05: Data Size: 4096 Metadata Size: 8 00:07:37.275 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.275 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.275 00:07:37.275 NVM Specific Namespace Data 00:07:37.275 =========================== 00:07:37.275 Logical Block Storage Tag Mask: 0 00:07:37.275 Protection Information Capabilities: 00:07:37.275 16b Guard Protection Information Storage Tag Support: No 00:07:37.275 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.275 Storage Tag Check Read Support: No 00:07:37.275 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.275 ===================================================== 00:07:37.275 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:37.275 ===================================================== 00:07:37.275 Controller Capabilities/Features 00:07:37.275 ================================ 00:07:37.275 Vendor ID: 1b36 00:07:37.275 Subsystem Vendor ID: 1af4 00:07:37.275 Serial Number: 12342 00:07:37.275 Model Number: QEMU NVMe Ctrl 00:07:37.275 Firmware Version: 8.0.0 00:07:37.275 Recommended Arb Burst: 6 00:07:37.275 IEEE OUI Identifier: 00 54 52 00:07:37.275 Multi-path I/O 00:07:37.275 May have multiple subsystem ports: No 00:07:37.275 May have multiple controllers: No 00:07:37.275 Associated with SR-IOV VF: No 00:07:37.275 Max Data Transfer Size: 524288 00:07:37.275 Max Number of Namespaces: 256 00:07:37.276 Max 
Number of I/O Queues: 64 00:07:37.276 NVMe Specification Version (VS): 1.4 00:07:37.276 NVMe Specification Version (Identify): 1.4 00:07:37.276 Maximum Queue Entries: 2048 00:07:37.276 Contiguous Queues Required: Yes 00:07:37.276 Arbitration Mechanisms Supported 00:07:37.276 Weighted Round Robin: Not Supported 00:07:37.276 Vendor Specific: Not Supported 00:07:37.276 Reset Timeout: 7500 ms 00:07:37.276 Doorbell Stride: 4 bytes 00:07:37.276 NVM Subsystem Reset: Not Supported 00:07:37.276 Command Sets Supported 00:07:37.276 NVM Command Set: Supported 00:07:37.276 Boot Partition: Not Supported 00:07:37.276 Memory Page Size Minimum: 4096 bytes 00:07:37.276 Memory Page Size Maximum: 65536 bytes 00:07:37.276 Persistent Memory Region: Not Supported 00:07:37.276 Optional Asynchronous Events Supported 00:07:37.276 Namespace Attribute Notices: Supported 00:07:37.276 Firmware Activation Notices: Not Supported 00:07:37.276 ANA Change Notices: Not Supported 00:07:37.276 PLE Aggregate Log Change Notices: Not Supported 00:07:37.276 LBA Status Info Alert Notices: Not Supported 00:07:37.276 EGE Aggregate Log Change Notices: Not Supported 00:07:37.276 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.276 Zone Descriptor Change Notices: Not Supported 00:07:37.276 Discovery Log Change Notices: Not Supported 00:07:37.276 Controller Attributes 00:07:37.276 128-bit Host Identifier: Not Supported 00:07:37.276 Non-Operational Permissive Mode: Not Supported 00:07:37.276 NVM Sets: Not Supported 00:07:37.276 Read Recovery Levels: Not Supported 00:07:37.276 Endurance Groups: Not Supported 00:07:37.276 Predictable Latency Mode: Not Supported 00:07:37.276 Traffic Based Keep ALive: Not Supported 00:07:37.276 Namespace Granularity: Not Supported 00:07:37.276 SQ Associations: Not Supported 00:07:37.276 UUID List: Not Supported 00:07:37.276 Multi-Domain Subsystem: Not Supported 00:07:37.276 Fixed Capacity Management: Not Supported 00:07:37.276 Variable Capacity Management: Not Supported 00:07:37.276 Delete Endurance Group: Not Supported 00:07:37.276 Delete NVM Set: Not Supported 00:07:37.276 Extended LBA Formats Supported: Supported 00:07:37.276 Flexible Data Placement Supported: Not Supported 00:07:37.276 00:07:37.276 Controller Memory Buffer Support 00:07:37.276 ================================ 00:07:37.276 Supported: No 00:07:37.276 00:07:37.276 Persistent Memory Region Support 00:07:37.276 ================================ 00:07:37.276 Supported: No 00:07:37.276 00:07:37.276 Admin Command Set Attributes 00:07:37.276 ============================ 00:07:37.276 Security Send/Receive: Not Supported 00:07:37.276 Format NVM: Supported 00:07:37.276 Firmware Activate/Download: Not Supported 00:07:37.276 Namespace Management: Supported 00:07:37.276 Device Self-Test: Not Supported 00:07:37.276 Directives: Supported 00:07:37.276 NVMe-MI: Not Supported 00:07:37.276 Virtualization Management: Not Supported 00:07:37.276 Doorbell Buffer Config: Supported 00:07:37.276 Get LBA Status Capability: Not Supported 00:07:37.276 Command & Feature Lockdown Capability: Not Supported 00:07:37.276 Abort Command Limit: 4 00:07:37.276 Async Event Request Limit: 4 00:07:37.276 Number of Firmware Slots: N/A 00:07:37.276 Firmware Slot 1 Read-Only: N/A 00:07:37.276 Firmware Activation Without Reset: N/A 00:07:37.276 Multiple Update Detection Support: N/A 00:07:37.276 Firmware Update Granularity: No Information Provided 00:07:37.276 Per-Namespace SMART Log: Yes 00:07:37.276 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.276 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:07:37.276 Command Effects Log Page: Supported 00:07:37.276 Get Log Page Extended Data: Supported 00:07:37.276 Telemetry Log Pages: Not Supported 00:07:37.276 Persistent Event Log Pages: Not Supported 00:07:37.276 Supported Log Pages Log Page: May Support 00:07:37.276 Commands Supported & Effects Log Page: Not Supported 00:07:37.276 Feature Identifiers & Effects Log Page:May Support 00:07:37.276 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.276 Data Area 4 for Telemetry Log: Not Supported 00:07:37.276 Error Log Page Entries Supported: 1 00:07:37.276 Keep Alive: Not Supported 00:07:37.276 00:07:37.276 NVM Command Set Attributes 00:07:37.276 ========================== 00:07:37.276 Submission Queue Entry Size 00:07:37.276 Max: 64 00:07:37.276 Min: 64 00:07:37.276 Completion Queue Entry Size 00:07:37.276 Max: 16 00:07:37.276 Min: 16 00:07:37.276 Number of Namespaces: 256 00:07:37.276 Compare Command: Supported 00:07:37.276 Write Uncorrectable Command: Not Supported 00:07:37.276 Dataset Management Command: Supported 00:07:37.276 Write Zeroes Command: Supported 00:07:37.276 Set Features Save Field: Supported 00:07:37.276 Reservations: Not Supported 00:07:37.276 Timestamp: Supported 00:07:37.276 Copy: Supported 00:07:37.276 Volatile Write Cache: Present 00:07:37.276 Atomic Write Unit (Normal): 1 00:07:37.276 Atomic Write Unit (PFail): 1 00:07:37.276 Atomic Compare & Write Unit: 1 00:07:37.276 Fused Compare & Write: Not Supported 00:07:37.276 Scatter-Gather List 00:07:37.276 SGL Command Set: Supported 00:07:37.276 SGL Keyed: Not Supported 00:07:37.276 SGL Bit Bucket Descriptor: Not Supported 00:07:37.276 SGL Metadata Pointer: Not Supported 00:07:37.276 Oversized SGL: Not Supported 00:07:37.276 SGL Metadata Address: Not Supported 00:07:37.276 SGL Offset: Not Supported 00:07:37.276 Transport SGL Data Block: Not Supported 00:07:37.276 Replay Protected Memory Block: Not Supported 00:07:37.276 00:07:37.276 Firmware Slot Information 00:07:37.276 ========================= 00:07:37.276 Active slot: 1 00:07:37.276 Slot 1 Firmware Revision: 1.0 00:07:37.276 00:07:37.276 00:07:37.276 Commands Supported and Effects 00:07:37.276 ============================== 00:07:37.276 Admin Commands 00:07:37.276 -------------- 00:07:37.276 Delete I/O Submission Queue (00h): Supported 00:07:37.276 Create I/O Submission Queue (01h): Supported 00:07:37.276 Get Log Page (02h): Supported 00:07:37.276 Delete I/O Completion Queue (04h): Supported 00:07:37.276 Create I/O Completion Queue (05h): Supported 00:07:37.276 Identify (06h): Supported 00:07:37.276 Abort (08h): Supported 00:07:37.276 Set Features (09h): Supported 00:07:37.276 Get Features (0Ah): Supported 00:07:37.276 Asynchronous Event Request (0Ch): Supported 00:07:37.276 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.276 Directive Send (19h): Supported 00:07:37.276 Directive Receive (1Ah): Supported 00:07:37.276 Virtualization Management (1Ch): Supported 00:07:37.276 Doorbell Buffer Config (7Ch): Supported 00:07:37.276 Format NVM (80h): Supported LBA-Change 00:07:37.276 I/O Commands 00:07:37.276 ------------ 00:07:37.276 Flush (00h): Supported LBA-Change 00:07:37.276 Write (01h): Supported LBA-Change 00:07:37.276 Read (02h): Supported 00:07:37.276 Compare (05h): Supported 00:07:37.276 Write Zeroes (08h): Supported LBA-Change 00:07:37.276 Dataset Management (09h): Supported LBA-Change 00:07:37.276 Unknown (0Ch): Supported 00:07:37.276 Unknown (12h): Supported 00:07:37.276 Copy (19h): Supported LBA-Change 
00:07:37.276 Unknown (1Dh): Supported LBA-Change 00:07:37.276 00:07:37.276 Error Log 00:07:37.277 ========= 00:07:37.277 00:07:37.277 Arbitration 00:07:37.277 =========== 00:07:37.277 Arbitration Burst: no limit 00:07:37.277 00:07:37.277 Power Management 00:07:37.277 ================ 00:07:37.277 Number of Power States: 1 00:07:37.277 Current Power State: Power State #0 00:07:37.277 Power State #0: 00:07:37.277 Max Power: 25.00 W 00:07:37.277 Non-Operational State: Operational 00:07:37.277 Entry Latency: 16 microseconds 00:07:37.277 Exit Latency: 4 microseconds 00:07:37.277 Relative Read Throughput: 0 00:07:37.277 Relative Read Latency: 0 00:07:37.277 Relative Write Throughput: 0 00:07:37.277 Relative Write Latency: 0 00:07:37.277 Idle Power: Not Reported 00:07:37.277 Active Power: Not Reported 00:07:37.277 Non-Operational Permissive Mode: Not Supported 00:07:37.277 00:07:37.277 Health Information 00:07:37.277 ================== 00:07:37.277 Critical Warnings: 00:07:37.277 Available Spare Space: OK 00:07:37.277 Temperature: OK 00:07:37.277 Device Reliability: OK 00:07:37.277 Read Only: No 00:07:37.277 Volatile Memory Backup: OK 00:07:37.277 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.277 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.277 Available Spare: 0% 00:07:37.277 Available Spare Threshold: 0% 00:07:37.277 Life Percentage Used: 0% 00:07:37.277 Data Units Read: 2465 00:07:37.277 Data Units Written: 2252 00:07:37.277 Host Read Commands: 127497 00:07:37.277 Host Write Commands: 125767 00:07:37.277 Controller Busy Time: 0 minutes 00:07:37.277 Power Cycles: 0 00:07:37.277 Power On Hours: 0 hours 00:07:37.277 Unsafe Shutdowns: 0 00:07:37.277 Unrecoverable Media Errors: 0 00:07:37.277 Lifetime Error Log Entries: 0 00:07:37.277 Warning Temperature Time: 0 minutes 00:07:37.277 Critical Temperature Time: 0 minutes 00:07:37.277 00:07:37.277 Number of Queues 00:07:37.277 ================ 00:07:37.277 Number of I/O Submission Queues: 64 00:07:37.277 Number of I/O Completion Queues: 64 00:07:37.277 00:07:37.277 ZNS Specific Controller Data 00:07:37.277 ============================ 00:07:37.277 Zone Append Size Limit: 0 00:07:37.277 00:07:37.277 00:07:37.277 Active Namespaces 00:07:37.277 ================= 00:07:37.277 Namespace ID:1 00:07:37.277 Error Recovery Timeout: Unlimited 00:07:37.277 Command Set Identifier: NVM (00h) 00:07:37.277 Deallocate: Supported 00:07:37.277 Deallocated/Unwritten Error: Supported 00:07:37.277 Deallocated Read Value: All 0x00 00:07:37.277 Deallocate in Write Zeroes: Not Supported 00:07:37.277 Deallocated Guard Field: 0xFFFF 00:07:37.277 Flush: Supported 00:07:37.277 Reservation: Not Supported 00:07:37.277 Namespace Sharing Capabilities: Private 00:07:37.277 Size (in LBAs): 1048576 (4GiB) 00:07:37.277 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.277 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.277 Thin Provisioning: Not Supported 00:07:37.277 Per-NS Atomic Units: No 00:07:37.277 Maximum Single Source Range Length: 128 00:07:37.277 Maximum Copy Length: 128 00:07:37.277 Maximum Source Range Count: 128 00:07:37.277 NGUID/EUI64 Never Reused: No 00:07:37.277 Namespace Write Protected: No 00:07:37.277 Number of LBA Formats: 8 00:07:37.277 Current LBA Format: LBA Format #04 00:07:37.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.277 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.277 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.277 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.277 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:37.277 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.277 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.277 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.277 00:07:37.277 NVM Specific Namespace Data 00:07:37.277 =========================== 00:07:37.277 Logical Block Storage Tag Mask: 0 00:07:37.277 Protection Information Capabilities: 00:07:37.277 16b Guard Protection Information Storage Tag Support: No 00:07:37.277 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.277 Storage Tag Check Read Support: No 00:07:37.277 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Namespace ID:2 00:07:37.277 Error Recovery Timeout: Unlimited 00:07:37.277 Command Set Identifier: NVM (00h) 00:07:37.277 Deallocate: Supported 00:07:37.277 Deallocated/Unwritten Error: Supported 00:07:37.277 Deallocated Read Value: All 0x00 00:07:37.277 Deallocate in Write Zeroes: Not Supported 00:07:37.277 Deallocated Guard Field: 0xFFFF 00:07:37.277 Flush: Supported 00:07:37.277 Reservation: Not Supported 00:07:37.277 Namespace Sharing Capabilities: Private 00:07:37.277 Size (in LBAs): 1048576 (4GiB) 00:07:37.277 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.277 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.277 Thin Provisioning: Not Supported 00:07:37.277 Per-NS Atomic Units: No 00:07:37.277 Maximum Single Source Range Length: 128 00:07:37.277 Maximum Copy Length: 128 00:07:37.277 Maximum Source Range Count: 128 00:07:37.277 NGUID/EUI64 Never Reused: No 00:07:37.277 Namespace Write Protected: No 00:07:37.277 Number of LBA Formats: 8 00:07:37.277 Current LBA Format: LBA Format #04 00:07:37.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.277 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.277 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.277 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.277 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.277 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.277 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.277 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.277 00:07:37.277 NVM Specific Namespace Data 00:07:37.277 =========================== 00:07:37.277 Logical Block Storage Tag Mask: 0 00:07:37.277 Protection Information Capabilities: 00:07:37.277 16b Guard Protection Information Storage Tag Support: No 00:07:37.277 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.277 Storage Tag Check Read Support: No 00:07:37.277 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:37.277 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.277 Namespace ID:3 00:07:37.277 Error Recovery Timeout: Unlimited 00:07:37.277 Command Set Identifier: NVM (00h) 00:07:37.277 Deallocate: Supported 00:07:37.277 Deallocated/Unwritten Error: Supported 00:07:37.277 Deallocated Read Value: All 0x00 00:07:37.277 Deallocate in Write Zeroes: Not Supported 00:07:37.277 Deallocated Guard Field: 0xFFFF 00:07:37.277 Flush: Supported 00:07:37.277 Reservation: Not Supported 00:07:37.277 Namespace Sharing Capabilities: Private 00:07:37.277 Size (in LBAs): 1048576 (4GiB) 00:07:37.277 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.277 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.277 Thin Provisioning: Not Supported 00:07:37.277 Per-NS Atomic Units: No 00:07:37.277 Maximum Single Source Range Length: 128 00:07:37.277 Maximum Copy Length: 128 00:07:37.277 Maximum Source Range Count: 128 00:07:37.277 NGUID/EUI64 Never Reused: No 00:07:37.277 Namespace Write Protected: No 00:07:37.277 Number of LBA Formats: 8 00:07:37.277 Current LBA Format: LBA Format #04 00:07:37.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.277 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.277 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.277 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.277 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.277 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.277 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.278 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.278 00:07:37.278 NVM Specific Namespace Data 00:07:37.278 =========================== 00:07:37.278 Logical Block Storage Tag Mask: 0 00:07:37.278 Protection Information Capabilities: 00:07:37.278 16b Guard Protection Information Storage Tag Support: No 00:07:37.278 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.278 Storage Tag Check Read Support: No 00:07:37.278 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.278 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.278 11:50:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:37.539 ===================================================== 00:07:37.539 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:37.539 ===================================================== 00:07:37.539 Controller Capabilities/Features 00:07:37.539 ================================ 00:07:37.539 Vendor ID: 1b36 00:07:37.539 Subsystem Vendor ID: 1af4 00:07:37.539 Serial Number: 12340 00:07:37.539 Model Number: QEMU NVMe Ctrl 00:07:37.539 Firmware Version: 8.0.0 00:07:37.539 Recommended Arb Burst: 6 00:07:37.539 IEEE OUI Identifier: 00 54 52 00:07:37.539 Multi-path I/O 00:07:37.539 May have multiple subsystem ports: No 00:07:37.539 May have multiple controllers: No 00:07:37.539 Associated with SR-IOV VF: No 00:07:37.539 Max Data Transfer Size: 524288 00:07:37.539 Max Number of Namespaces: 256 00:07:37.539 Max Number of I/O Queues: 64 00:07:37.539 NVMe Specification Version (VS): 1.4 00:07:37.539 NVMe Specification Version (Identify): 1.4 00:07:37.539 Maximum Queue Entries: 2048 00:07:37.539 Contiguous Queues Required: Yes 00:07:37.539 Arbitration Mechanisms Supported 00:07:37.539 Weighted Round Robin: Not Supported 00:07:37.539 Vendor Specific: Not Supported 00:07:37.539 Reset Timeout: 7500 ms 00:07:37.539 Doorbell Stride: 4 bytes 00:07:37.539 NVM Subsystem Reset: Not Supported 00:07:37.539 Command Sets Supported 00:07:37.539 NVM Command Set: Supported 00:07:37.539 Boot Partition: Not Supported 00:07:37.539 Memory Page Size Minimum: 4096 bytes 00:07:37.539 Memory Page Size Maximum: 65536 bytes 00:07:37.539 Persistent Memory Region: Not Supported 00:07:37.539 Optional Asynchronous Events Supported 00:07:37.539 Namespace Attribute Notices: Supported 00:07:37.539 Firmware Activation Notices: Not Supported 00:07:37.539 ANA Change Notices: Not Supported 00:07:37.539 PLE Aggregate Log Change Notices: Not Supported 00:07:37.539 LBA Status Info Alert Notices: Not Supported 00:07:37.539 EGE Aggregate Log Change Notices: Not Supported 00:07:37.539 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.539 Zone Descriptor Change Notices: Not Supported 00:07:37.539 Discovery Log Change Notices: Not Supported 00:07:37.539 Controller Attributes 00:07:37.539 128-bit Host Identifier: Not Supported 00:07:37.539 Non-Operational Permissive Mode: Not Supported 00:07:37.539 NVM Sets: Not Supported 00:07:37.539 Read Recovery Levels: Not Supported 00:07:37.539 Endurance Groups: Not Supported 00:07:37.539 Predictable Latency Mode: Not Supported 00:07:37.539 Traffic Based Keep ALive: Not Supported 00:07:37.539 Namespace Granularity: Not Supported 00:07:37.539 SQ Associations: Not Supported 00:07:37.539 UUID List: Not Supported 00:07:37.539 Multi-Domain Subsystem: Not Supported 00:07:37.539 Fixed Capacity Management: Not Supported 00:07:37.539 Variable Capacity Management: Not Supported 00:07:37.539 Delete Endurance Group: Not Supported 00:07:37.539 Delete NVM Set: Not Supported 00:07:37.539 Extended LBA Formats Supported: Supported 00:07:37.539 Flexible Data Placement Supported: Not Supported 00:07:37.539 00:07:37.539 Controller Memory Buffer Support 00:07:37.539 ================================ 00:07:37.539 Supported: No 00:07:37.539 00:07:37.539 Persistent Memory Region Support 00:07:37.539 ================================ 00:07:37.539 Supported: No 00:07:37.539 00:07:37.539 Admin Command Set Attributes 00:07:37.539 ============================ 00:07:37.539 Security Send/Receive: Not Supported 00:07:37.539 
Format NVM: Supported 00:07:37.539 Firmware Activate/Download: Not Supported 00:07:37.539 Namespace Management: Supported 00:07:37.539 Device Self-Test: Not Supported 00:07:37.539 Directives: Supported 00:07:37.539 NVMe-MI: Not Supported 00:07:37.539 Virtualization Management: Not Supported 00:07:37.539 Doorbell Buffer Config: Supported 00:07:37.539 Get LBA Status Capability: Not Supported 00:07:37.539 Command & Feature Lockdown Capability: Not Supported 00:07:37.539 Abort Command Limit: 4 00:07:37.539 Async Event Request Limit: 4 00:07:37.539 Number of Firmware Slots: N/A 00:07:37.539 Firmware Slot 1 Read-Only: N/A 00:07:37.539 Firmware Activation Without Reset: N/A 00:07:37.539 Multiple Update Detection Support: N/A 00:07:37.539 Firmware Update Granularity: No Information Provided 00:07:37.539 Per-Namespace SMART Log: Yes 00:07:37.539 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.539 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:37.539 Command Effects Log Page: Supported 00:07:37.539 Get Log Page Extended Data: Supported 00:07:37.539 Telemetry Log Pages: Not Supported 00:07:37.539 Persistent Event Log Pages: Not Supported 00:07:37.539 Supported Log Pages Log Page: May Support 00:07:37.539 Commands Supported & Effects Log Page: Not Supported 00:07:37.539 Feature Identifiers & Effects Log Page:May Support 00:07:37.539 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.539 Data Area 4 for Telemetry Log: Not Supported 00:07:37.539 Error Log Page Entries Supported: 1 00:07:37.539 Keep Alive: Not Supported 00:07:37.539 00:07:37.539 NVM Command Set Attributes 00:07:37.539 ========================== 00:07:37.539 Submission Queue Entry Size 00:07:37.539 Max: 64 00:07:37.539 Min: 64 00:07:37.539 Completion Queue Entry Size 00:07:37.539 Max: 16 00:07:37.539 Min: 16 00:07:37.539 Number of Namespaces: 256 00:07:37.539 Compare Command: Supported 00:07:37.539 Write Uncorrectable Command: Not Supported 00:07:37.539 Dataset Management Command: Supported 00:07:37.539 Write Zeroes Command: Supported 00:07:37.539 Set Features Save Field: Supported 00:07:37.539 Reservations: Not Supported 00:07:37.539 Timestamp: Supported 00:07:37.539 Copy: Supported 00:07:37.539 Volatile Write Cache: Present 00:07:37.539 Atomic Write Unit (Normal): 1 00:07:37.539 Atomic Write Unit (PFail): 1 00:07:37.539 Atomic Compare & Write Unit: 1 00:07:37.539 Fused Compare & Write: Not Supported 00:07:37.539 Scatter-Gather List 00:07:37.539 SGL Command Set: Supported 00:07:37.539 SGL Keyed: Not Supported 00:07:37.539 SGL Bit Bucket Descriptor: Not Supported 00:07:37.539 SGL Metadata Pointer: Not Supported 00:07:37.539 Oversized SGL: Not Supported 00:07:37.539 SGL Metadata Address: Not Supported 00:07:37.539 SGL Offset: Not Supported 00:07:37.539 Transport SGL Data Block: Not Supported 00:07:37.539 Replay Protected Memory Block: Not Supported 00:07:37.539 00:07:37.539 Firmware Slot Information 00:07:37.539 ========================= 00:07:37.539 Active slot: 1 00:07:37.539 Slot 1 Firmware Revision: 1.0 00:07:37.539 00:07:37.539 00:07:37.539 Commands Supported and Effects 00:07:37.539 ============================== 00:07:37.539 Admin Commands 00:07:37.539 -------------- 00:07:37.539 Delete I/O Submission Queue (00h): Supported 00:07:37.539 Create I/O Submission Queue (01h): Supported 00:07:37.539 Get Log Page (02h): Supported 00:07:37.539 Delete I/O Completion Queue (04h): Supported 00:07:37.539 Create I/O Completion Queue (05h): Supported 00:07:37.539 Identify (06h): Supported 00:07:37.539 Abort (08h): Supported 
00:07:37.539 Set Features (09h): Supported 00:07:37.539 Get Features (0Ah): Supported 00:07:37.539 Asynchronous Event Request (0Ch): Supported 00:07:37.540 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.540 Directive Send (19h): Supported 00:07:37.540 Directive Receive (1Ah): Supported 00:07:37.540 Virtualization Management (1Ch): Supported 00:07:37.540 Doorbell Buffer Config (7Ch): Supported 00:07:37.540 Format NVM (80h): Supported LBA-Change 00:07:37.540 I/O Commands 00:07:37.540 ------------ 00:07:37.540 Flush (00h): Supported LBA-Change 00:07:37.540 Write (01h): Supported LBA-Change 00:07:37.540 Read (02h): Supported 00:07:37.540 Compare (05h): Supported 00:07:37.540 Write Zeroes (08h): Supported LBA-Change 00:07:37.540 Dataset Management (09h): Supported LBA-Change 00:07:37.540 Unknown (0Ch): Supported 00:07:37.540 Unknown (12h): Supported 00:07:37.540 Copy (19h): Supported LBA-Change 00:07:37.540 Unknown (1Dh): Supported LBA-Change 00:07:37.540 00:07:37.540 Error Log 00:07:37.540 ========= 00:07:37.540 00:07:37.540 Arbitration 00:07:37.540 =========== 00:07:37.540 Arbitration Burst: no limit 00:07:37.540 00:07:37.540 Power Management 00:07:37.540 ================ 00:07:37.540 Number of Power States: 1 00:07:37.540 Current Power State: Power State #0 00:07:37.540 Power State #0: 00:07:37.540 Max Power: 25.00 W 00:07:37.540 Non-Operational State: Operational 00:07:37.540 Entry Latency: 16 microseconds 00:07:37.540 Exit Latency: 4 microseconds 00:07:37.540 Relative Read Throughput: 0 00:07:37.540 Relative Read Latency: 0 00:07:37.540 Relative Write Throughput: 0 00:07:37.540 Relative Write Latency: 0 00:07:37.540 Idle Power: Not Reported 00:07:37.540 Active Power: Not Reported 00:07:37.540 Non-Operational Permissive Mode: Not Supported 00:07:37.540 00:07:37.540 Health Information 00:07:37.540 ================== 00:07:37.540 Critical Warnings: 00:07:37.540 Available Spare Space: OK 00:07:37.540 Temperature: OK 00:07:37.540 Device Reliability: OK 00:07:37.540 Read Only: No 00:07:37.540 Volatile Memory Backup: OK 00:07:37.540 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.540 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.540 Available Spare: 0% 00:07:37.540 Available Spare Threshold: 0% 00:07:37.540 Life Percentage Used: 0% 00:07:37.540 Data Units Read: 733 00:07:37.540 Data Units Written: 661 00:07:37.540 Host Read Commands: 41324 00:07:37.540 Host Write Commands: 41110 00:07:37.540 Controller Busy Time: 0 minutes 00:07:37.540 Power Cycles: 0 00:07:37.540 Power On Hours: 0 hours 00:07:37.540 Unsafe Shutdowns: 0 00:07:37.540 Unrecoverable Media Errors: 0 00:07:37.540 Lifetime Error Log Entries: 0 00:07:37.540 Warning Temperature Time: 0 minutes 00:07:37.540 Critical Temperature Time: 0 minutes 00:07:37.540 00:07:37.540 Number of Queues 00:07:37.540 ================ 00:07:37.540 Number of I/O Submission Queues: 64 00:07:37.540 Number of I/O Completion Queues: 64 00:07:37.540 00:07:37.540 ZNS Specific Controller Data 00:07:37.540 ============================ 00:07:37.540 Zone Append Size Limit: 0 00:07:37.540 00:07:37.540 00:07:37.540 Active Namespaces 00:07:37.540 ================= 00:07:37.540 Namespace ID:1 00:07:37.540 Error Recovery Timeout: Unlimited 00:07:37.540 Command Set Identifier: NVM (00h) 00:07:37.540 Deallocate: Supported 00:07:37.540 Deallocated/Unwritten Error: Supported 00:07:37.540 Deallocated Read Value: All 0x00 00:07:37.540 Deallocate in Write Zeroes: Not Supported 00:07:37.540 Deallocated Guard Field: 0xFFFF 00:07:37.540 Flush: 
Supported 00:07:37.540 Reservation: Not Supported 00:07:37.540 Metadata Transferred as: Separate Metadata Buffer 00:07:37.540 Namespace Sharing Capabilities: Private 00:07:37.540 Size (in LBAs): 1548666 (5GiB) 00:07:37.540 Capacity (in LBAs): 1548666 (5GiB) 00:07:37.540 Utilization (in LBAs): 1548666 (5GiB) 00:07:37.540 Thin Provisioning: Not Supported 00:07:37.540 Per-NS Atomic Units: No 00:07:37.540 Maximum Single Source Range Length: 128 00:07:37.540 Maximum Copy Length: 128 00:07:37.540 Maximum Source Range Count: 128 00:07:37.540 NGUID/EUI64 Never Reused: No 00:07:37.540 Namespace Write Protected: No 00:07:37.540 Number of LBA Formats: 8 00:07:37.540 Current LBA Format: LBA Format #07 00:07:37.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.540 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.540 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.540 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.540 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.540 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.540 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.540 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.540 00:07:37.540 NVM Specific Namespace Data 00:07:37.540 =========================== 00:07:37.540 Logical Block Storage Tag Mask: 0 00:07:37.540 Protection Information Capabilities: 00:07:37.540 16b Guard Protection Information Storage Tag Support: No 00:07:37.540 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.540 Storage Tag Check Read Support: No 00:07:37.540 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.540 11:50:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.540 11:50:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:37.800 ===================================================== 00:07:37.800 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:37.800 ===================================================== 00:07:37.800 Controller Capabilities/Features 00:07:37.800 ================================ 00:07:37.800 Vendor ID: 1b36 00:07:37.800 Subsystem Vendor ID: 1af4 00:07:37.800 Serial Number: 12341 00:07:37.800 Model Number: QEMU NVMe Ctrl 00:07:37.800 Firmware Version: 8.0.0 00:07:37.800 Recommended Arb Burst: 6 00:07:37.800 IEEE OUI Identifier: 00 54 52 00:07:37.800 Multi-path I/O 00:07:37.800 May have multiple subsystem ports: No 00:07:37.800 May have multiple controllers: No 00:07:37.800 Associated with SR-IOV VF: No 00:07:37.800 Max Data Transfer Size: 524288 00:07:37.800 Max Number of Namespaces: 256 00:07:37.800 Max Number of I/O Queues: 64 00:07:37.800 NVMe 
Specification Version (VS): 1.4 00:07:37.800 NVMe Specification Version (Identify): 1.4 00:07:37.800 Maximum Queue Entries: 2048 00:07:37.800 Contiguous Queues Required: Yes 00:07:37.800 Arbitration Mechanisms Supported 00:07:37.800 Weighted Round Robin: Not Supported 00:07:37.800 Vendor Specific: Not Supported 00:07:37.800 Reset Timeout: 7500 ms 00:07:37.800 Doorbell Stride: 4 bytes 00:07:37.800 NVM Subsystem Reset: Not Supported 00:07:37.800 Command Sets Supported 00:07:37.800 NVM Command Set: Supported 00:07:37.800 Boot Partition: Not Supported 00:07:37.800 Memory Page Size Minimum: 4096 bytes 00:07:37.800 Memory Page Size Maximum: 65536 bytes 00:07:37.800 Persistent Memory Region: Not Supported 00:07:37.800 Optional Asynchronous Events Supported 00:07:37.800 Namespace Attribute Notices: Supported 00:07:37.800 Firmware Activation Notices: Not Supported 00:07:37.800 ANA Change Notices: Not Supported 00:07:37.800 PLE Aggregate Log Change Notices: Not Supported 00:07:37.800 LBA Status Info Alert Notices: Not Supported 00:07:37.800 EGE Aggregate Log Change Notices: Not Supported 00:07:37.800 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.800 Zone Descriptor Change Notices: Not Supported 00:07:37.800 Discovery Log Change Notices: Not Supported 00:07:37.800 Controller Attributes 00:07:37.800 128-bit Host Identifier: Not Supported 00:07:37.800 Non-Operational Permissive Mode: Not Supported 00:07:37.800 NVM Sets: Not Supported 00:07:37.800 Read Recovery Levels: Not Supported 00:07:37.800 Endurance Groups: Not Supported 00:07:37.800 Predictable Latency Mode: Not Supported 00:07:37.800 Traffic Based Keep ALive: Not Supported 00:07:37.800 Namespace Granularity: Not Supported 00:07:37.800 SQ Associations: Not Supported 00:07:37.800 UUID List: Not Supported 00:07:37.800 Multi-Domain Subsystem: Not Supported 00:07:37.800 Fixed Capacity Management: Not Supported 00:07:37.800 Variable Capacity Management: Not Supported 00:07:37.800 Delete Endurance Group: Not Supported 00:07:37.800 Delete NVM Set: Not Supported 00:07:37.800 Extended LBA Formats Supported: Supported 00:07:37.800 Flexible Data Placement Supported: Not Supported 00:07:37.800 00:07:37.800 Controller Memory Buffer Support 00:07:37.800 ================================ 00:07:37.800 Supported: No 00:07:37.800 00:07:37.800 Persistent Memory Region Support 00:07:37.800 ================================ 00:07:37.800 Supported: No 00:07:37.800 00:07:37.800 Admin Command Set Attributes 00:07:37.800 ============================ 00:07:37.800 Security Send/Receive: Not Supported 00:07:37.800 Format NVM: Supported 00:07:37.800 Firmware Activate/Download: Not Supported 00:07:37.800 Namespace Management: Supported 00:07:37.800 Device Self-Test: Not Supported 00:07:37.800 Directives: Supported 00:07:37.800 NVMe-MI: Not Supported 00:07:37.800 Virtualization Management: Not Supported 00:07:37.800 Doorbell Buffer Config: Supported 00:07:37.800 Get LBA Status Capability: Not Supported 00:07:37.800 Command & Feature Lockdown Capability: Not Supported 00:07:37.800 Abort Command Limit: 4 00:07:37.800 Async Event Request Limit: 4 00:07:37.800 Number of Firmware Slots: N/A 00:07:37.800 Firmware Slot 1 Read-Only: N/A 00:07:37.800 Firmware Activation Without Reset: N/A 00:07:37.800 Multiple Update Detection Support: N/A 00:07:37.800 Firmware Update Granularity: No Information Provided 00:07:37.800 Per-Namespace SMART Log: Yes 00:07:37.800 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.800 Subsystem NQN: nqn.2019-08.org.qemu:12341 
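[Editor's note: the identify dumps in this section are produced by nvme.sh looping over the PCIe BDFs under test — the "for bdf in "${bdfs[@]}"" trace lines visible between dumps. A minimal standalone reproduction, assuming SPDK is built under /home/vagrant/spdk_repo/spdk and the four QEMU controllers are attached at the same addresses as in this run, might look like the sketch below.]

    # Re-run the identify pass from this log by hand (assumed setup: same
    # build tree and same BDFs as the invocations traced in this log).
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    for bdf in "${bdfs[@]}"; do
        # Same transport string and flags as the logged invocations.
        "$IDENTIFY" -r "trtype:PCIe traddr:${bdf}" -i 0
    done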
00:07:37.800 Command Effects Log Page: Supported 00:07:37.800 Get Log Page Extended Data: Supported 00:07:37.800 Telemetry Log Pages: Not Supported 00:07:37.800 Persistent Event Log Pages: Not Supported 00:07:37.800 Supported Log Pages Log Page: May Support 00:07:37.800 Commands Supported & Effects Log Page: Not Supported 00:07:37.800 Feature Identifiers & Effects Log Page:May Support 00:07:37.800 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.800 Data Area 4 for Telemetry Log: Not Supported 00:07:37.800 Error Log Page Entries Supported: 1 00:07:37.800 Keep Alive: Not Supported 00:07:37.800 00:07:37.800 NVM Command Set Attributes 00:07:37.800 ========================== 00:07:37.800 Submission Queue Entry Size 00:07:37.800 Max: 64 00:07:37.800 Min: 64 00:07:37.800 Completion Queue Entry Size 00:07:37.800 Max: 16 00:07:37.800 Min: 16 00:07:37.800 Number of Namespaces: 256 00:07:37.800 Compare Command: Supported 00:07:37.800 Write Uncorrectable Command: Not Supported 00:07:37.800 Dataset Management Command: Supported 00:07:37.800 Write Zeroes Command: Supported 00:07:37.800 Set Features Save Field: Supported 00:07:37.800 Reservations: Not Supported 00:07:37.800 Timestamp: Supported 00:07:37.800 Copy: Supported 00:07:37.800 Volatile Write Cache: Present 00:07:37.800 Atomic Write Unit (Normal): 1 00:07:37.800 Atomic Write Unit (PFail): 1 00:07:37.800 Atomic Compare & Write Unit: 1 00:07:37.800 Fused Compare & Write: Not Supported 00:07:37.800 Scatter-Gather List 00:07:37.800 SGL Command Set: Supported 00:07:37.800 SGL Keyed: Not Supported 00:07:37.800 SGL Bit Bucket Descriptor: Not Supported 00:07:37.800 SGL Metadata Pointer: Not Supported 00:07:37.800 Oversized SGL: Not Supported 00:07:37.800 SGL Metadata Address: Not Supported 00:07:37.800 SGL Offset: Not Supported 00:07:37.800 Transport SGL Data Block: Not Supported 00:07:37.800 Replay Protected Memory Block: Not Supported 00:07:37.800 00:07:37.800 Firmware Slot Information 00:07:37.800 ========================= 00:07:37.800 Active slot: 1 00:07:37.800 Slot 1 Firmware Revision: 1.0 00:07:37.800 00:07:37.800 00:07:37.800 Commands Supported and Effects 00:07:37.800 ============================== 00:07:37.800 Admin Commands 00:07:37.800 -------------- 00:07:37.800 Delete I/O Submission Queue (00h): Supported 00:07:37.800 Create I/O Submission Queue (01h): Supported 00:07:37.800 Get Log Page (02h): Supported 00:07:37.800 Delete I/O Completion Queue (04h): Supported 00:07:37.800 Create I/O Completion Queue (05h): Supported 00:07:37.800 Identify (06h): Supported 00:07:37.800 Abort (08h): Supported 00:07:37.800 Set Features (09h): Supported 00:07:37.800 Get Features (0Ah): Supported 00:07:37.800 Asynchronous Event Request (0Ch): Supported 00:07:37.801 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.801 Directive Send (19h): Supported 00:07:37.801 Directive Receive (1Ah): Supported 00:07:37.801 Virtualization Management (1Ch): Supported 00:07:37.801 Doorbell Buffer Config (7Ch): Supported 00:07:37.801 Format NVM (80h): Supported LBA-Change 00:07:37.801 I/O Commands 00:07:37.801 ------------ 00:07:37.801 Flush (00h): Supported LBA-Change 00:07:37.801 Write (01h): Supported LBA-Change 00:07:37.801 Read (02h): Supported 00:07:37.801 Compare (05h): Supported 00:07:37.801 Write Zeroes (08h): Supported LBA-Change 00:07:37.801 Dataset Management (09h): Supported LBA-Change 00:07:37.801 Unknown (0Ch): Supported 00:07:37.801 Unknown (12h): Supported 00:07:37.801 Copy (19h): Supported LBA-Change 00:07:37.801 Unknown (1Dh): 
Supported LBA-Change 00:07:37.801 00:07:37.801 Error Log 00:07:37.801 ========= 00:07:37.801 00:07:37.801 Arbitration 00:07:37.801 =========== 00:07:37.801 Arbitration Burst: no limit 00:07:37.801 00:07:37.801 Power Management 00:07:37.801 ================ 00:07:37.801 Number of Power States: 1 00:07:37.801 Current Power State: Power State #0 00:07:37.801 Power State #0: 00:07:37.801 Max Power: 25.00 W 00:07:37.801 Non-Operational State: Operational 00:07:37.801 Entry Latency: 16 microseconds 00:07:37.801 Exit Latency: 4 microseconds 00:07:37.801 Relative Read Throughput: 0 00:07:37.801 Relative Read Latency: 0 00:07:37.801 Relative Write Throughput: 0 00:07:37.801 Relative Write Latency: 0 00:07:37.801 Idle Power: Not Reported 00:07:37.801 Active Power: Not Reported 00:07:37.801 Non-Operational Permissive Mode: Not Supported 00:07:37.801 00:07:37.801 Health Information 00:07:37.801 ================== 00:07:37.801 Critical Warnings: 00:07:37.801 Available Spare Space: OK 00:07:37.801 Temperature: OK 00:07:37.801 Device Reliability: OK 00:07:37.801 Read Only: No 00:07:37.801 Volatile Memory Backup: OK 00:07:37.801 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.801 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.801 Available Spare: 0% 00:07:37.801 Available Spare Threshold: 0% 00:07:37.801 Life Percentage Used: 0% 00:07:37.801 Data Units Read: 1159 00:07:37.801 Data Units Written: 1026 00:07:37.801 Host Read Commands: 61392 00:07:37.801 Host Write Commands: 60163 00:07:37.801 Controller Busy Time: 0 minutes 00:07:37.801 Power Cycles: 0 00:07:37.801 Power On Hours: 0 hours 00:07:37.801 Unsafe Shutdowns: 0 00:07:37.801 Unrecoverable Media Errors: 0 00:07:37.801 Lifetime Error Log Entries: 0 00:07:37.801 Warning Temperature Time: 0 minutes 00:07:37.801 Critical Temperature Time: 0 minutes 00:07:37.801 00:07:37.801 Number of Queues 00:07:37.801 ================ 00:07:37.801 Number of I/O Submission Queues: 64 00:07:37.801 Number of I/O Completion Queues: 64 00:07:37.801 00:07:37.801 ZNS Specific Controller Data 00:07:37.801 ============================ 00:07:37.801 Zone Append Size Limit: 0 00:07:37.801 00:07:37.801 00:07:37.801 Active Namespaces 00:07:37.801 ================= 00:07:37.801 Namespace ID:1 00:07:37.801 Error Recovery Timeout: Unlimited 00:07:37.801 Command Set Identifier: NVM (00h) 00:07:37.801 Deallocate: Supported 00:07:37.801 Deallocated/Unwritten Error: Supported 00:07:37.801 Deallocated Read Value: All 0x00 00:07:37.801 Deallocate in Write Zeroes: Not Supported 00:07:37.801 Deallocated Guard Field: 0xFFFF 00:07:37.801 Flush: Supported 00:07:37.801 Reservation: Not Supported 00:07:37.801 Namespace Sharing Capabilities: Private 00:07:37.801 Size (in LBAs): 1310720 (5GiB) 00:07:37.801 Capacity (in LBAs): 1310720 (5GiB) 00:07:37.801 Utilization (in LBAs): 1310720 (5GiB) 00:07:37.801 Thin Provisioning: Not Supported 00:07:37.801 Per-NS Atomic Units: No 00:07:37.801 Maximum Single Source Range Length: 128 00:07:37.801 Maximum Copy Length: 128 00:07:37.801 Maximum Source Range Count: 128 00:07:37.801 NGUID/EUI64 Never Reused: No 00:07:37.801 Namespace Write Protected: No 00:07:37.801 Number of LBA Formats: 8 00:07:37.801 Current LBA Format: LBA Format #04 00:07:37.801 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.801 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.801 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.801 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.801 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:37.801 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.801 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.801 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.801 00:07:37.801 NVM Specific Namespace Data 00:07:37.801 =========================== 00:07:37.801 Logical Block Storage Tag Mask: 0 00:07:37.801 Protection Information Capabilities: 00:07:37.801 16b Guard Protection Information Storage Tag Support: No 00:07:37.801 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.801 Storage Tag Check Read Support: No 00:07:37.801 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.801 11:50:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.801 11:50:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:38.063 ===================================================== 00:07:38.063 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:38.063 ===================================================== 00:07:38.063 Controller Capabilities/Features 00:07:38.063 ================================ 00:07:38.063 Vendor ID: 1b36 00:07:38.063 Subsystem Vendor ID: 1af4 00:07:38.063 Serial Number: 12342 00:07:38.063 Model Number: QEMU NVMe Ctrl 00:07:38.063 Firmware Version: 8.0.0 00:07:38.063 Recommended Arb Burst: 6 00:07:38.063 IEEE OUI Identifier: 00 54 52 00:07:38.063 Multi-path I/O 00:07:38.063 May have multiple subsystem ports: No 00:07:38.063 May have multiple controllers: No 00:07:38.063 Associated with SR-IOV VF: No 00:07:38.063 Max Data Transfer Size: 524288 00:07:38.063 Max Number of Namespaces: 256 00:07:38.063 Max Number of I/O Queues: 64 00:07:38.063 NVMe Specification Version (VS): 1.4 00:07:38.063 NVMe Specification Version (Identify): 1.4 00:07:38.063 Maximum Queue Entries: 2048 00:07:38.063 Contiguous Queues Required: Yes 00:07:38.063 Arbitration Mechanisms Supported 00:07:38.063 Weighted Round Robin: Not Supported 00:07:38.063 Vendor Specific: Not Supported 00:07:38.063 Reset Timeout: 7500 ms 00:07:38.063 Doorbell Stride: 4 bytes 00:07:38.063 NVM Subsystem Reset: Not Supported 00:07:38.063 Command Sets Supported 00:07:38.063 NVM Command Set: Supported 00:07:38.063 Boot Partition: Not Supported 00:07:38.063 Memory Page Size Minimum: 4096 bytes 00:07:38.063 Memory Page Size Maximum: 65536 bytes 00:07:38.063 Persistent Memory Region: Not Supported 00:07:38.063 Optional Asynchronous Events Supported 00:07:38.063 Namespace Attribute Notices: Supported 00:07:38.063 Firmware Activation Notices: Not Supported 00:07:38.063 ANA Change Notices: Not Supported 00:07:38.063 PLE Aggregate Log Change Notices: Not Supported 00:07:38.063 LBA Status Info Alert Notices: 
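[Editor's note: the Health Information blocks above report temperature in Kelvin with a parenthesized Celsius value. The printed pairs (323 Kelvin / 50 Celsius, 343 Kelvin / 70 Celsius) are consistent with the tool applying a simple integer offset of 273; a quick shell check of that arithmetic:]

    # Verify the Kelvin-to-Celsius values printed in the Health Information blocks.
    for k in 323 343; do
        echo "$k Kelvin = $(( k - 273 )) Celsius"
    done
    # prints: 323 Kelvin = 50 Celsius
    #         343 Kelvin = 70 Celsius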
Not Supported 00:07:38.063 EGE Aggregate Log Change Notices: Not Supported 00:07:38.063 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.063 Zone Descriptor Change Notices: Not Supported 00:07:38.063 Discovery Log Change Notices: Not Supported 00:07:38.063 Controller Attributes 00:07:38.063 128-bit Host Identifier: Not Supported 00:07:38.063 Non-Operational Permissive Mode: Not Supported 00:07:38.063 NVM Sets: Not Supported 00:07:38.063 Read Recovery Levels: Not Supported 00:07:38.063 Endurance Groups: Not Supported 00:07:38.063 Predictable Latency Mode: Not Supported 00:07:38.063 Traffic Based Keep ALive: Not Supported 00:07:38.063 Namespace Granularity: Not Supported 00:07:38.063 SQ Associations: Not Supported 00:07:38.063 UUID List: Not Supported 00:07:38.063 Multi-Domain Subsystem: Not Supported 00:07:38.063 Fixed Capacity Management: Not Supported 00:07:38.063 Variable Capacity Management: Not Supported 00:07:38.063 Delete Endurance Group: Not Supported 00:07:38.063 Delete NVM Set: Not Supported 00:07:38.063 Extended LBA Formats Supported: Supported 00:07:38.063 Flexible Data Placement Supported: Not Supported 00:07:38.063 00:07:38.063 Controller Memory Buffer Support 00:07:38.063 ================================ 00:07:38.063 Supported: No 00:07:38.063 00:07:38.063 Persistent Memory Region Support 00:07:38.063 ================================ 00:07:38.063 Supported: No 00:07:38.063 00:07:38.063 Admin Command Set Attributes 00:07:38.063 ============================ 00:07:38.063 Security Send/Receive: Not Supported 00:07:38.063 Format NVM: Supported 00:07:38.063 Firmware Activate/Download: Not Supported 00:07:38.063 Namespace Management: Supported 00:07:38.063 Device Self-Test: Not Supported 00:07:38.063 Directives: Supported 00:07:38.064 NVMe-MI: Not Supported 00:07:38.064 Virtualization Management: Not Supported 00:07:38.064 Doorbell Buffer Config: Supported 00:07:38.064 Get LBA Status Capability: Not Supported 00:07:38.064 Command & Feature Lockdown Capability: Not Supported 00:07:38.064 Abort Command Limit: 4 00:07:38.064 Async Event Request Limit: 4 00:07:38.064 Number of Firmware Slots: N/A 00:07:38.064 Firmware Slot 1 Read-Only: N/A 00:07:38.064 Firmware Activation Without Reset: N/A 00:07:38.064 Multiple Update Detection Support: N/A 00:07:38.064 Firmware Update Granularity: No Information Provided 00:07:38.064 Per-Namespace SMART Log: Yes 00:07:38.064 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.064 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:38.064 Command Effects Log Page: Supported 00:07:38.064 Get Log Page Extended Data: Supported 00:07:38.064 Telemetry Log Pages: Not Supported 00:07:38.064 Persistent Event Log Pages: Not Supported 00:07:38.064 Supported Log Pages Log Page: May Support 00:07:38.064 Commands Supported & Effects Log Page: Not Supported 00:07:38.064 Feature Identifiers & Effects Log Page:May Support 00:07:38.064 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.064 Data Area 4 for Telemetry Log: Not Supported 00:07:38.064 Error Log Page Entries Supported: 1 00:07:38.064 Keep Alive: Not Supported 00:07:38.064 00:07:38.064 NVM Command Set Attributes 00:07:38.064 ========================== 00:07:38.064 Submission Queue Entry Size 00:07:38.064 Max: 64 00:07:38.064 Min: 64 00:07:38.064 Completion Queue Entry Size 00:07:38.064 Max: 16 00:07:38.064 Min: 16 00:07:38.064 Number of Namespaces: 256 00:07:38.064 Compare Command: Supported 00:07:38.064 Write Uncorrectable Command: Not Supported 00:07:38.064 Dataset Management Command: 
Supported 00:07:38.064 Write Zeroes Command: Supported 00:07:38.064 Set Features Save Field: Supported 00:07:38.064 Reservations: Not Supported 00:07:38.064 Timestamp: Supported 00:07:38.064 Copy: Supported 00:07:38.064 Volatile Write Cache: Present 00:07:38.064 Atomic Write Unit (Normal): 1 00:07:38.064 Atomic Write Unit (PFail): 1 00:07:38.064 Atomic Compare & Write Unit: 1 00:07:38.064 Fused Compare & Write: Not Supported 00:07:38.064 Scatter-Gather List 00:07:38.064 SGL Command Set: Supported 00:07:38.064 SGL Keyed: Not Supported 00:07:38.064 SGL Bit Bucket Descriptor: Not Supported 00:07:38.064 SGL Metadata Pointer: Not Supported 00:07:38.064 Oversized SGL: Not Supported 00:07:38.064 SGL Metadata Address: Not Supported 00:07:38.064 SGL Offset: Not Supported 00:07:38.064 Transport SGL Data Block: Not Supported 00:07:38.064 Replay Protected Memory Block: Not Supported 00:07:38.064 00:07:38.064 Firmware Slot Information 00:07:38.064 ========================= 00:07:38.064 Active slot: 1 00:07:38.064 Slot 1 Firmware Revision: 1.0 00:07:38.064 00:07:38.064 00:07:38.064 Commands Supported and Effects 00:07:38.064 ============================== 00:07:38.064 Admin Commands 00:07:38.064 -------------- 00:07:38.064 Delete I/O Submission Queue (00h): Supported 00:07:38.064 Create I/O Submission Queue (01h): Supported 00:07:38.064 Get Log Page (02h): Supported 00:07:38.064 Delete I/O Completion Queue (04h): Supported 00:07:38.064 Create I/O Completion Queue (05h): Supported 00:07:38.064 Identify (06h): Supported 00:07:38.064 Abort (08h): Supported 00:07:38.064 Set Features (09h): Supported 00:07:38.064 Get Features (0Ah): Supported 00:07:38.064 Asynchronous Event Request (0Ch): Supported 00:07:38.064 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.064 Directive Send (19h): Supported 00:07:38.064 Directive Receive (1Ah): Supported 00:07:38.064 Virtualization Management (1Ch): Supported 00:07:38.064 Doorbell Buffer Config (7Ch): Supported 00:07:38.064 Format NVM (80h): Supported LBA-Change 00:07:38.064 I/O Commands 00:07:38.064 ------------ 00:07:38.064 Flush (00h): Supported LBA-Change 00:07:38.064 Write (01h): Supported LBA-Change 00:07:38.064 Read (02h): Supported 00:07:38.064 Compare (05h): Supported 00:07:38.064 Write Zeroes (08h): Supported LBA-Change 00:07:38.064 Dataset Management (09h): Supported LBA-Change 00:07:38.064 Unknown (0Ch): Supported 00:07:38.064 Unknown (12h): Supported 00:07:38.064 Copy (19h): Supported LBA-Change 00:07:38.064 Unknown (1Dh): Supported LBA-Change 00:07:38.064 00:07:38.064 Error Log 00:07:38.064 ========= 00:07:38.064 00:07:38.064 Arbitration 00:07:38.064 =========== 00:07:38.064 Arbitration Burst: no limit 00:07:38.064 00:07:38.064 Power Management 00:07:38.064 ================ 00:07:38.064 Number of Power States: 1 00:07:38.064 Current Power State: Power State #0 00:07:38.064 Power State #0: 00:07:38.064 Max Power: 25.00 W 00:07:38.064 Non-Operational State: Operational 00:07:38.064 Entry Latency: 16 microseconds 00:07:38.064 Exit Latency: 4 microseconds 00:07:38.064 Relative Read Throughput: 0 00:07:38.064 Relative Read Latency: 0 00:07:38.064 Relative Write Throughput: 0 00:07:38.064 Relative Write Latency: 0 00:07:38.064 Idle Power: Not Reported 00:07:38.064 Active Power: Not Reported 00:07:38.064 Non-Operational Permissive Mode: Not Supported 00:07:38.064 00:07:38.064 Health Information 00:07:38.064 ================== 00:07:38.064 Critical Warnings: 00:07:38.064 Available Spare Space: OK 00:07:38.064 Temperature: OK 00:07:38.064 Device 
Reliability: OK 00:07:38.064 Read Only: No 00:07:38.064 Volatile Memory Backup: OK 00:07:38.064 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.064 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.064 Available Spare: 0% 00:07:38.064 Available Spare Threshold: 0% 00:07:38.064 Life Percentage Used: 0% 00:07:38.064 Data Units Read: 2465 00:07:38.064 Data Units Written: 2252 00:07:38.064 Host Read Commands: 127497 00:07:38.064 Host Write Commands: 125767 00:07:38.064 Controller Busy Time: 0 minutes 00:07:38.064 Power Cycles: 0 00:07:38.064 Power On Hours: 0 hours 00:07:38.064 Unsafe Shutdowns: 0 00:07:38.064 Unrecoverable Media Errors: 0 00:07:38.064 Lifetime Error Log Entries: 0 00:07:38.064 Warning Temperature Time: 0 minutes 00:07:38.064 Critical Temperature Time: 0 minutes 00:07:38.064 00:07:38.064 Number of Queues 00:07:38.064 ================ 00:07:38.064 Number of I/O Submission Queues: 64 00:07:38.064 Number of I/O Completion Queues: 64 00:07:38.064 00:07:38.064 ZNS Specific Controller Data 00:07:38.064 ============================ 00:07:38.064 Zone Append Size Limit: 0 00:07:38.064 00:07:38.064 00:07:38.064 Active Namespaces 00:07:38.064 ================= 00:07:38.064 Namespace ID:1 00:07:38.064 Error Recovery Timeout: Unlimited 00:07:38.064 Command Set Identifier: NVM (00h) 00:07:38.064 Deallocate: Supported 00:07:38.064 Deallocated/Unwritten Error: Supported 00:07:38.064 Deallocated Read Value: All 0x00 00:07:38.064 Deallocate in Write Zeroes: Not Supported 00:07:38.064 Deallocated Guard Field: 0xFFFF 00:07:38.064 Flush: Supported 00:07:38.064 Reservation: Not Supported 00:07:38.064 Namespace Sharing Capabilities: Private 00:07:38.064 Size (in LBAs): 1048576 (4GiB) 00:07:38.064 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.064 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.064 Thin Provisioning: Not Supported 00:07:38.064 Per-NS Atomic Units: No 00:07:38.064 Maximum Single Source Range Length: 128 00:07:38.064 Maximum Copy Length: 128 00:07:38.064 Maximum Source Range Count: 128 00:07:38.064 NGUID/EUI64 Never Reused: No 00:07:38.064 Namespace Write Protected: No 00:07:38.064 Number of LBA Formats: 8 00:07:38.064 Current LBA Format: LBA Format #04 00:07:38.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.064 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.064 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.064 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.064 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.064 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.064 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.064 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.064 00:07:38.064 NVM Specific Namespace Data 00:07:38.064 =========================== 00:07:38.065 Logical Block Storage Tag Mask: 0 00:07:38.065 Protection Information Capabilities: 00:07:38.065 16b Guard Protection Information Storage Tag Support: No 00:07:38.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.065 Storage Tag Check Read Support: No 00:07:38.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Namespace ID:2 00:07:38.065 Error Recovery Timeout: Unlimited 00:07:38.065 Command Set Identifier: NVM (00h) 00:07:38.065 Deallocate: Supported 00:07:38.065 Deallocated/Unwritten Error: Supported 00:07:38.065 Deallocated Read Value: All 0x00 00:07:38.065 Deallocate in Write Zeroes: Not Supported 00:07:38.065 Deallocated Guard Field: 0xFFFF 00:07:38.065 Flush: Supported 00:07:38.065 Reservation: Not Supported 00:07:38.065 Namespace Sharing Capabilities: Private 00:07:38.065 Size (in LBAs): 1048576 (4GiB) 00:07:38.065 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.065 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.065 Thin Provisioning: Not Supported 00:07:38.065 Per-NS Atomic Units: No 00:07:38.065 Maximum Single Source Range Length: 128 00:07:38.065 Maximum Copy Length: 128 00:07:38.065 Maximum Source Range Count: 128 00:07:38.065 NGUID/EUI64 Never Reused: No 00:07:38.065 Namespace Write Protected: No 00:07:38.065 Number of LBA Formats: 8 00:07:38.065 Current LBA Format: LBA Format #04 00:07:38.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.065 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.065 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.065 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.065 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.065 00:07:38.065 NVM Specific Namespace Data 00:07:38.065 =========================== 00:07:38.065 Logical Block Storage Tag Mask: 0 00:07:38.065 Protection Information Capabilities: 00:07:38.065 16b Guard Protection Information Storage Tag Support: No 00:07:38.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.065 Storage Tag Check Read Support: No 00:07:38.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Namespace ID:3 00:07:38.065 Error Recovery Timeout: Unlimited 00:07:38.065 Command Set Identifier: NVM (00h) 00:07:38.065 Deallocate: Supported 00:07:38.065 Deallocated/Unwritten Error: Supported 00:07:38.065 Deallocated Read Value: All 0x00 00:07:38.065 Deallocate in Write Zeroes: Not Supported 00:07:38.065 Deallocated Guard Field: 0xFFFF 00:07:38.065 Flush: Supported 00:07:38.065 Reservation: Not Supported 00:07:38.065 
Namespace Sharing Capabilities: Private 00:07:38.065 Size (in LBAs): 1048576 (4GiB) 00:07:38.065 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.065 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.065 Thin Provisioning: Not Supported 00:07:38.065 Per-NS Atomic Units: No 00:07:38.065 Maximum Single Source Range Length: 128 00:07:38.065 Maximum Copy Length: 128 00:07:38.065 Maximum Source Range Count: 128 00:07:38.065 NGUID/EUI64 Never Reused: No 00:07:38.065 Namespace Write Protected: No 00:07:38.065 Number of LBA Formats: 8 00:07:38.065 Current LBA Format: LBA Format #04 00:07:38.065 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.065 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.065 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.065 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.065 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.065 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.065 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.065 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.065 00:07:38.065 NVM Specific Namespace Data 00:07:38.065 =========================== 00:07:38.065 Logical Block Storage Tag Mask: 0 00:07:38.065 Protection Information Capabilities: 00:07:38.065 16b Guard Protection Information Storage Tag Support: No 00:07:38.065 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.065 Storage Tag Check Read Support: No 00:07:38.065 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.065 11:50:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.065 11:50:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:38.341 ===================================================== 00:07:38.341 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:38.341 ===================================================== 00:07:38.341 Controller Capabilities/Features 00:07:38.341 ================================ 00:07:38.341 Vendor ID: 1b36 00:07:38.341 Subsystem Vendor ID: 1af4 00:07:38.341 Serial Number: 12343 00:07:38.341 Model Number: QEMU NVMe Ctrl 00:07:38.341 Firmware Version: 8.0.0 00:07:38.341 Recommended Arb Burst: 6 00:07:38.341 IEEE OUI Identifier: 00 54 52 00:07:38.341 Multi-path I/O 00:07:38.341 May have multiple subsystem ports: No 00:07:38.341 May have multiple controllers: Yes 00:07:38.341 Associated with SR-IOV VF: No 00:07:38.341 Max Data Transfer Size: 524288 00:07:38.341 Max Number of Namespaces: 256 00:07:38.341 Max Number of I/O Queues: 64 00:07:38.341 NVMe Specification Version (VS): 1.4 00:07:38.341 NVMe Specification Version (Identify): 1.4 00:07:38.341 Maximum Queue Entries: 2048 
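[Editor's note: the namespace records above report size both in LBAs and in GiB, and the two figures agree once the LBA count is multiplied by the data size of the current LBA format (4096 bytes for LBA Format #04). Checking the values from this log:]

    # Size (in LBAs) x data size = capacity in bytes
    echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB (nvme 12342 namespaces)
    echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB (nvme 12341 namespace 1)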
00:07:38.341 Contiguous Queues Required: Yes 00:07:38.341 Arbitration Mechanisms Supported 00:07:38.341 Weighted Round Robin: Not Supported 00:07:38.341 Vendor Specific: Not Supported 00:07:38.341 Reset Timeout: 7500 ms 00:07:38.341 Doorbell Stride: 4 bytes 00:07:38.341 NVM Subsystem Reset: Not Supported 00:07:38.341 Command Sets Supported 00:07:38.341 NVM Command Set: Supported 00:07:38.341 Boot Partition: Not Supported 00:07:38.341 Memory Page Size Minimum: 4096 bytes 00:07:38.341 Memory Page Size Maximum: 65536 bytes 00:07:38.341 Persistent Memory Region: Not Supported 00:07:38.341 Optional Asynchronous Events Supported 00:07:38.341 Namespace Attribute Notices: Supported 00:07:38.341 Firmware Activation Notices: Not Supported 00:07:38.341 ANA Change Notices: Not Supported 00:07:38.341 PLE Aggregate Log Change Notices: Not Supported 00:07:38.341 LBA Status Info Alert Notices: Not Supported 00:07:38.341 EGE Aggregate Log Change Notices: Not Supported 00:07:38.341 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.341 Zone Descriptor Change Notices: Not Supported 00:07:38.341 Discovery Log Change Notices: Not Supported 00:07:38.341 Controller Attributes 00:07:38.341 128-bit Host Identifier: Not Supported 00:07:38.341 Non-Operational Permissive Mode: Not Supported 00:07:38.341 NVM Sets: Not Supported 00:07:38.341 Read Recovery Levels: Not Supported 00:07:38.341 Endurance Groups: Supported 00:07:38.341 Predictable Latency Mode: Not Supported 00:07:38.341 Traffic Based Keep Alive: Not Supported 00:07:38.341 Namespace Granularity: Not Supported 00:07:38.341 SQ Associations: Not Supported 00:07:38.341 UUID List: Not Supported 00:07:38.341 Multi-Domain Subsystem: Not Supported 00:07:38.341 Fixed Capacity Management: Not Supported 00:07:38.341 Variable Capacity Management: Not Supported 00:07:38.341 Delete Endurance Group: Not Supported 00:07:38.341 Delete NVM Set: Not Supported 00:07:38.341 Extended LBA Formats Supported: Supported 00:07:38.341 Flexible Data Placement Supported: Supported 00:07:38.341 00:07:38.341 Controller Memory Buffer Support 00:07:38.341 ================================ 00:07:38.341 Supported: No 00:07:38.341 00:07:38.341 Persistent Memory Region Support 00:07:38.341 ================================ 00:07:38.341 Supported: No 00:07:38.341 00:07:38.341 Admin Command Set Attributes 00:07:38.341 ============================ 00:07:38.341 Security Send/Receive: Not Supported 00:07:38.341 Format NVM: Supported 00:07:38.341 Firmware Activate/Download: Not Supported 00:07:38.341 Namespace Management: Supported 00:07:38.341 Device Self-Test: Not Supported 00:07:38.341 Directives: Supported 00:07:38.341 NVMe-MI: Not Supported 00:07:38.341 Virtualization Management: Not Supported 00:07:38.341 Doorbell Buffer Config: Supported 00:07:38.341 Get LBA Status Capability: Not Supported 00:07:38.341 Command & Feature Lockdown Capability: Not Supported 00:07:38.341 Abort Command Limit: 4 00:07:38.341 Async Event Request Limit: 4 00:07:38.341 Number of Firmware Slots: N/A 00:07:38.341 Firmware Slot 1 Read-Only: N/A 00:07:38.341 Firmware Activation Without Reset: N/A 00:07:38.341 Multiple Update Detection Support: N/A 00:07:38.341 Firmware Update Granularity: No Information Provided 00:07:38.341 Per-Namespace SMART Log: Yes 00:07:38.341 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.341 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:38.341 Command Effects Log Page: Supported 00:07:38.341 Get Log Page Extended Data: Supported 00:07:38.341 Telemetry Log Pages: Not
Supported 00:07:38.341 Persistent Event Log Pages: Not Supported 00:07:38.341 Supported Log Pages Log Page: May Support 00:07:38.341 Commands Supported & Effects Log Page: Not Supported 00:07:38.341 Feature Identifiers & Effects Log Page: May Support 00:07:38.341 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.341 Data Area 4 for Telemetry Log: Not Supported 00:07:38.341 Error Log Page Entries Supported: 1 00:07:38.341 Keep Alive: Not Supported 00:07:38.341 00:07:38.341 NVM Command Set Attributes 00:07:38.341 ========================== 00:07:38.341 Submission Queue Entry Size 00:07:38.341 Max: 64 00:07:38.341 Min: 64 00:07:38.341 Completion Queue Entry Size 00:07:38.341 Max: 16 00:07:38.341 Min: 16 00:07:38.341 Number of Namespaces: 256 00:07:38.341 Compare Command: Supported 00:07:38.341 Write Uncorrectable Command: Not Supported 00:07:38.341 Dataset Management Command: Supported 00:07:38.341 Write Zeroes Command: Supported 00:07:38.341 Set Features Save Field: Supported 00:07:38.341 Reservations: Not Supported 00:07:38.341 Timestamp: Supported 00:07:38.341 Copy: Supported 00:07:38.341 Volatile Write Cache: Present 00:07:38.341 Atomic Write Unit (Normal): 1 00:07:38.341 Atomic Write Unit (PFail): 1 00:07:38.341 Atomic Compare & Write Unit: 1 00:07:38.341 Fused Compare & Write: Not Supported 00:07:38.341 Scatter-Gather List 00:07:38.341 SGL Command Set: Supported 00:07:38.341 SGL Keyed: Not Supported 00:07:38.341 SGL Bit Bucket Descriptor: Not Supported 00:07:38.341 SGL Metadata Pointer: Not Supported 00:07:38.341 Oversized SGL: Not Supported 00:07:38.341 SGL Metadata Address: Not Supported 00:07:38.341 SGL Offset: Not Supported 00:07:38.341 Transport SGL Data Block: Not Supported 00:07:38.341 Replay Protected Memory Block: Not Supported 00:07:38.341 00:07:38.341 Firmware Slot Information 00:07:38.341 ========================= 00:07:38.341 Active slot: 1 00:07:38.341 Slot 1 Firmware Revision: 1.0 00:07:38.341 00:07:38.341 00:07:38.342 Commands Supported and Effects 00:07:38.342 ============================== 00:07:38.342 Admin Commands 00:07:38.342 -------------- 00:07:38.342 Delete I/O Submission Queue (00h): Supported 00:07:38.342 Create I/O Submission Queue (01h): Supported 00:07:38.342 Get Log Page (02h): Supported 00:07:38.342 Delete I/O Completion Queue (04h): Supported 00:07:38.342 Create I/O Completion Queue (05h): Supported 00:07:38.342 Identify (06h): Supported 00:07:38.342 Abort (08h): Supported 00:07:38.342 Set Features (09h): Supported 00:07:38.342 Get Features (0Ah): Supported 00:07:38.342 Asynchronous Event Request (0Ch): Supported 00:07:38.342 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.342 Directive Send (19h): Supported 00:07:38.342 Directive Receive (1Ah): Supported 00:07:38.342 Virtualization Management (1Ch): Supported 00:07:38.342 Doorbell Buffer Config (7Ch): Supported 00:07:38.342 Format NVM (80h): Supported LBA-Change 00:07:38.342 I/O Commands 00:07:38.342 ------------ 00:07:38.342 Flush (00h): Supported LBA-Change 00:07:38.342 Write (01h): Supported LBA-Change 00:07:38.342 Read (02h): Supported 00:07:38.342 Compare (05h): Supported 00:07:38.342 Write Zeroes (08h): Supported LBA-Change 00:07:38.342 Dataset Management (09h): Supported LBA-Change 00:07:38.342 Unknown (0Ch): Supported 00:07:38.342 Unknown (12h): Supported 00:07:38.342 Copy (19h): Supported LBA-Change 00:07:38.342 Unknown (1Dh): Supported LBA-Change 00:07:38.342 00:07:38.342 Error Log 00:07:38.342 ========= 00:07:38.342 00:07:38.342 Arbitration 00:07:38.342 ===========
00:07:38.342 Arbitration Burst: no limit 00:07:38.342 00:07:38.342 Power Management 00:07:38.342 ================ 00:07:38.342 Number of Power States: 1 00:07:38.342 Current Power State: Power State #0 00:07:38.342 Power State #0: 00:07:38.342 Max Power: 25.00 W 00:07:38.342 Non-Operational State: Operational 00:07:38.342 Entry Latency: 16 microseconds 00:07:38.342 Exit Latency: 4 microseconds 00:07:38.342 Relative Read Throughput: 0 00:07:38.342 Relative Read Latency: 0 00:07:38.342 Relative Write Throughput: 0 00:07:38.342 Relative Write Latency: 0 00:07:38.342 Idle Power: Not Reported 00:07:38.342 Active Power: Not Reported 00:07:38.342 Non-Operational Permissive Mode: Not Supported 00:07:38.342 00:07:38.342 Health Information 00:07:38.342 ================== 00:07:38.342 Critical Warnings: 00:07:38.342 Available Spare Space: OK 00:07:38.342 Temperature: OK 00:07:38.342 Device Reliability: OK 00:07:38.342 Read Only: No 00:07:38.342 Volatile Memory Backup: OK 00:07:38.342 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.342 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.342 Available Spare: 0% 00:07:38.342 Available Spare Threshold: 0% 00:07:38.342 Life Percentage Used: 0% 00:07:38.342 Data Units Read: 958 00:07:38.342 Data Units Written: 887 00:07:38.342 Host Read Commands: 43624 00:07:38.342 Host Write Commands: 43049 00:07:38.342 Controller Busy Time: 0 minutes 00:07:38.342 Power Cycles: 0 00:07:38.342 Power On Hours: 0 hours 00:07:38.342 Unsafe Shutdowns: 0 00:07:38.342 Unrecoverable Media Errors: 0 00:07:38.342 Lifetime Error Log Entries: 0 00:07:38.342 Warning Temperature Time: 0 minutes 00:07:38.342 Critical Temperature Time: 0 minutes 00:07:38.342 00:07:38.342 Number of Queues 00:07:38.342 ================ 00:07:38.342 Number of I/O Submission Queues: 64 00:07:38.342 Number of I/O Completion Queues: 64 00:07:38.342 00:07:38.342 ZNS Specific Controller Data 00:07:38.342 ============================ 00:07:38.342 Zone Append Size Limit: 0 00:07:38.342 00:07:38.342 00:07:38.342 Active Namespaces 00:07:38.342 ================= 00:07:38.342 Namespace ID:1 00:07:38.342 Error Recovery Timeout: Unlimited 00:07:38.342 Command Set Identifier: NVM (00h) 00:07:38.342 Deallocate: Supported 00:07:38.342 Deallocated/Unwritten Error: Supported 00:07:38.342 Deallocated Read Value: All 0x00 00:07:38.342 Deallocate in Write Zeroes: Not Supported 00:07:38.342 Deallocated Guard Field: 0xFFFF 00:07:38.342 Flush: Supported 00:07:38.342 Reservation: Not Supported 00:07:38.342 Namespace Sharing Capabilities: Multiple Controllers 00:07:38.342 Size (in LBAs): 262144 (1GiB) 00:07:38.342 Capacity (in LBAs): 262144 (1GiB) 00:07:38.342 Utilization (in LBAs): 262144 (1GiB) 00:07:38.342 Thin Provisioning: Not Supported 00:07:38.342 Per-NS Atomic Units: No 00:07:38.342 Maximum Single Source Range Length: 128 00:07:38.342 Maximum Copy Length: 128 00:07:38.342 Maximum Source Range Count: 128 00:07:38.342 NGUID/EUI64 Never Reused: No 00:07:38.342 Namespace Write Protected: No 00:07:38.342 Endurance group ID: 1 00:07:38.342 Number of LBA Formats: 8 00:07:38.342 Current LBA Format: LBA Format #04 00:07:38.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.342 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.342 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.342 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.342 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.342 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.342 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:38.342 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.342 00:07:38.342 Get Feature FDP: 00:07:38.342 ================ 00:07:38.342 Enabled: Yes 00:07:38.342 FDP configuration index: 0 00:07:38.342 00:07:38.342 FDP configurations log page 00:07:38.342 =========================== 00:07:38.342 Number of FDP configurations: 1 00:07:38.342 Version: 0 00:07:38.342 Size: 112 00:07:38.342 FDP Configuration Descriptor: 0 00:07:38.342 Descriptor Size: 96 00:07:38.342 Reclaim Group Identifier format: 2 00:07:38.342 FDP Volatile Write Cache: Not Present 00:07:38.342 FDP Configuration: Valid 00:07:38.342 Vendor Specific Size: 0 00:07:38.342 Number of Reclaim Groups: 2 00:07:38.342 Number of Reclaim Unit Handles: 8 00:07:38.342 Max Placement Identifiers: 128 00:07:38.342 Number of Namespaces Supported: 256 00:07:38.342 Reclaim Unit Nominal Size: 6000000 bytes 00:07:38.342 Estimated Reclaim Unit Time Limit: Not Reported 00:07:38.342 RUH Desc #000: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #001: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #002: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #003: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #004: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #005: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #006: RUH Type: Initially Isolated 00:07:38.342 RUH Desc #007: RUH Type: Initially Isolated 00:07:38.342 00:07:38.342 FDP reclaim unit handle usage log page 00:07:38.342 ====================================== 00:07:38.342 Number of Reclaim Unit Handles: 8 00:07:38.342 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:38.342 RUH Usage Desc #001: RUH Attributes: Unused 00:07:38.342 RUH Usage Desc #002: RUH Attributes: Unused 00:07:38.342 RUH Usage Desc #003: RUH Attributes: Unused 00:07:38.342 RUH Usage Desc #004: RUH Attributes: Unused 00:07:38.342 RUH Usage Desc #005: RUH Attributes: Unused 00:07:38.342 RUH Usage Desc #006: RUH Attributes: Unused 00:07:38.342 RUH Usage Desc #007: RUH Attributes: Unused 00:07:38.342 00:07:38.342 FDP statistics log page 00:07:38.342 ======================= 00:07:38.342 Host bytes with metadata written: 559194112 00:07:38.342 Media bytes with metadata written: 559271936 00:07:38.342 Media bytes erased: 0 00:07:38.342 00:07:38.342 FDP events log page 00:07:38.342 =================== 00:07:38.342 Number of FDP events: 0 00:07:38.342 00:07:38.342 NVM Specific Namespace Data 00:07:38.342 =========================== 00:07:38.342 Logical Block Storage Tag Mask: 0 00:07:38.342 Protection Information Capabilities: 00:07:38.342 16b Guard Protection Information Storage Tag Support: No 00:07:38.342 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.342 Storage Tag Check Read Support: No 00:07:38.342 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.342 00:07:38.342 real 0m1.221s 00:07:38.342 user 0m0.434s 00:07:38.342 sys 0m0.543s 00:07:38.343 11:50:35 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.343 11:50:35 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:38.343 ************************************ 00:07:38.343 END TEST nvme_identify 00:07:38.343 ************************************ 00:07:38.343 11:50:35 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:38.343 11:50:35 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:38.343 11:50:35 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.343 11:50:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.343 ************************************ 00:07:38.343 START TEST nvme_perf 00:07:38.343 ************************************ 00:07:38.343 11:50:35 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:07:38.343 11:50:35 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:39.721 Initializing NVMe Controllers 00:07:39.721 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.721 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.721 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.721 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.721 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:39.721 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:39.721 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:39.721 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:39.721 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:39.721 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:39.721 Initialization complete. Launching workers. 
00:07:39.721 ======================================================== 00:07:39.721 Latency(us) 00:07:39.721 Device Information : IOPS MiB/s Average min max 00:07:39.721 PCIE (0000:00:11.0) NSID 1 from core 0: 14174.73 166.11 9042.78 5663.45 40435.02 00:07:39.721 PCIE (0000:00:13.0) NSID 1 from core 0: 14174.73 166.11 9029.72 5618.72 39989.07 00:07:39.721 PCIE (0000:00:10.0) NSID 1 from core 0: 14174.73 166.11 9014.76 5563.88 39098.82 00:07:39.721 PCIE (0000:00:12.0) NSID 1 from core 0: 14174.73 166.11 9001.28 5617.29 37756.47 00:07:39.721 PCIE (0000:00:12.0) NSID 2 from core 0: 14174.73 166.11 8987.01 5646.03 36931.13 00:07:39.721 PCIE (0000:00:12.0) NSID 3 from core 0: 14238.58 166.86 8932.60 5670.33 28099.65 00:07:39.721 ======================================================== 00:07:39.721 Total : 85112.24 997.41 9001.31 5563.88 40435.02 00:07:39.721 00:07:39.721 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.721 ================================================================================= 00:07:39.721 1.00000% : 5772.209us 00:07:39.721 10.00000% : 5999.065us 00:07:39.721 25.00000% : 6225.920us 00:07:39.721 50.00000% : 6604.012us 00:07:39.721 75.00000% : 11443.594us 00:07:39.721 90.00000% : 15526.991us 00:07:39.721 95.00000% : 16938.535us 00:07:39.721 98.00000% : 18652.554us 00:07:39.721 99.00000% : 19358.326us 00:07:39.721 99.50000% : 33070.474us 00:07:39.721 99.90000% : 40128.197us 00:07:39.721 99.99000% : 40531.495us 00:07:39.721 99.99900% : 40531.495us 00:07:39.721 99.99990% : 40531.495us 00:07:39.721 99.99999% : 40531.495us 00:07:39.721 00:07:39.721 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.721 ================================================================================= 00:07:39.721 1.00000% : 5797.415us 00:07:39.721 10.00000% : 5999.065us 00:07:39.721 25.00000% : 6225.920us 00:07:39.721 50.00000% : 6604.012us 00:07:39.721 75.00000% : 11342.769us 00:07:39.721 90.00000% : 15325.342us 00:07:39.721 95.00000% : 16837.711us 00:07:39.721 98.00000% : 18551.729us 00:07:39.721 99.00000% : 19358.326us 00:07:39.721 99.50000% : 32062.228us 00:07:39.721 99.90000% : 39724.898us 00:07:39.721 99.99000% : 40128.197us 00:07:39.721 99.99900% : 40128.197us 00:07:39.721 99.99990% : 40128.197us 00:07:39.721 99.99999% : 40128.197us 00:07:39.721 00:07:39.721 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.721 ================================================================================= 00:07:39.721 1.00000% : 5696.591us 00:07:39.721 10.00000% : 5948.652us 00:07:39.721 25.00000% : 6225.920us 00:07:39.721 50.00000% : 6654.425us 00:07:39.721 75.00000% : 11393.182us 00:07:39.721 90.00000% : 15325.342us 00:07:39.721 95.00000% : 16837.711us 00:07:39.721 98.00000% : 18551.729us 00:07:39.721 99.00000% : 19660.800us 00:07:39.721 99.50000% : 30852.332us 00:07:39.721 99.90000% : 38716.652us 00:07:39.721 99.99000% : 39119.951us 00:07:39.721 99.99900% : 39119.951us 00:07:39.721 99.99990% : 39119.951us 00:07:39.721 99.99999% : 39119.951us 00:07:39.721 00:07:39.721 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.721 ================================================================================= 00:07:39.721 1.00000% : 5772.209us 00:07:39.721 10.00000% : 5999.065us 00:07:39.721 25.00000% : 6225.920us 00:07:39.721 50.00000% : 6604.012us 00:07:39.721 75.00000% : 11241.945us 00:07:39.721 90.00000% : 15325.342us 00:07:39.721 95.00000% : 16736.886us 00:07:39.721 98.00000% : 18551.729us 00:07:39.721 
99.00000% : 19156.677us 00:07:39.721 99.50000% : 29037.489us 00:07:39.721 99.90000% : 37506.757us 00:07:39.721 99.99000% : 37910.055us 00:07:39.721 99.99900% : 37910.055us 00:07:39.721 99.99990% : 37910.055us 00:07:39.721 99.99999% : 37910.055us 00:07:39.721 00:07:39.721 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.721 ================================================================================= 00:07:39.721 1.00000% : 5772.209us 00:07:39.721 10.00000% : 5999.065us 00:07:39.721 25.00000% : 6225.920us 00:07:39.721 50.00000% : 6604.012us 00:07:39.721 75.00000% : 11292.357us 00:07:39.721 90.00000% : 15224.517us 00:07:39.721 95.00000% : 16837.711us 00:07:39.721 98.00000% : 18753.378us 00:07:39.721 99.00000% : 19559.975us 00:07:39.721 99.50000% : 28432.542us 00:07:39.721 99.90000% : 36700.160us 00:07:39.721 99.99000% : 37103.458us 00:07:39.721 99.99900% : 37103.458us 00:07:39.721 99.99990% : 37103.458us 00:07:39.721 99.99999% : 37103.458us 00:07:39.721 00:07:39.721 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.721 ================================================================================= 00:07:39.721 1.00000% : 5772.209us 00:07:39.721 10.00000% : 5999.065us 00:07:39.721 25.00000% : 6251.126us 00:07:39.721 50.00000% : 6654.425us 00:07:39.721 75.00000% : 11443.594us 00:07:39.721 90.00000% : 15426.166us 00:07:39.721 95.00000% : 16938.535us 00:07:39.721 98.00000% : 18652.554us 00:07:39.721 99.00000% : 19358.326us 00:07:39.721 99.50000% : 21273.994us 00:07:39.721 99.90000% : 27827.594us 00:07:39.721 99.99000% : 28230.892us 00:07:39.721 99.99900% : 28230.892us 00:07:39.721 99.99990% : 28230.892us 00:07:39.721 99.99999% : 28230.892us 00:07:39.721 00:07:39.721 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.721 ============================================================================== 00:07:39.721 Range in us Cumulative IO count 00:07:39.721 5646.178 - 5671.385: 0.0211% ( 3) 00:07:39.721 5671.385 - 5696.591: 0.0774% ( 8) 00:07:39.721 5696.591 - 5721.797: 0.1619% ( 12) 00:07:39.721 5721.797 - 5747.003: 0.4223% ( 37) 00:07:39.721 5747.003 - 5772.209: 1.0065% ( 83) 00:07:39.721 5772.209 - 5797.415: 1.6892% ( 97) 00:07:39.721 5797.415 - 5822.622: 2.4212% ( 104) 00:07:39.721 5822.622 - 5847.828: 3.3713% ( 135) 00:07:39.721 5847.828 - 5873.034: 4.5467% ( 167) 00:07:39.721 5873.034 - 5898.240: 5.7925% ( 177) 00:07:39.721 5898.240 - 5923.446: 6.9679% ( 167) 00:07:39.722 5923.446 - 5948.652: 8.0659% ( 156) 00:07:39.722 5948.652 - 5973.858: 9.4454% ( 196) 00:07:39.722 5973.858 - 5999.065: 10.8812% ( 204) 00:07:39.722 5999.065 - 6024.271: 12.3522% ( 209) 00:07:39.722 6024.271 - 6049.477: 13.9710% ( 230) 00:07:39.722 6049.477 - 6074.683: 15.6954% ( 245) 00:07:39.722 6074.683 - 6099.889: 17.3072% ( 229) 00:07:39.722 6099.889 - 6125.095: 18.8837% ( 224) 00:07:39.722 6125.095 - 6150.302: 20.4885% ( 228) 00:07:39.722 6150.302 - 6175.508: 22.0721% ( 225) 00:07:39.722 6175.508 - 6200.714: 23.6557% ( 225) 00:07:39.722 6200.714 - 6225.920: 25.2041% ( 220) 00:07:39.722 6225.920 - 6251.126: 26.7736% ( 223) 00:07:39.722 6251.126 - 6276.332: 28.3361% ( 222) 00:07:39.722 6276.332 - 6301.538: 30.0113% ( 238) 00:07:39.722 6301.538 - 6326.745: 31.7708% ( 250) 00:07:39.722 6326.745 - 6351.951: 33.5586% ( 254) 00:07:39.722 6351.951 - 6377.157: 35.3885% ( 260) 00:07:39.722 6377.157 - 6402.363: 37.1903% ( 256) 00:07:39.722 6402.363 - 6427.569: 38.9710% ( 253) 00:07:39.722 6427.569 - 6452.775: 40.7095% ( 247) 00:07:39.722 6452.775 - 
6503.188: 44.1371% ( 487) 00:07:39.722 6503.188 - 6553.600: 47.2621% ( 444) 00:07:39.722 6553.600 - 6604.012: 50.0563% ( 397) 00:07:39.722 6604.012 - 6654.425: 52.3156% ( 321) 00:07:39.722 6654.425 - 6704.837: 54.1104% ( 255) 00:07:39.722 6704.837 - 6755.249: 55.4688% ( 193) 00:07:39.722 6755.249 - 6805.662: 56.4893% ( 145) 00:07:39.722 6805.662 - 6856.074: 57.4465% ( 136) 00:07:39.722 6856.074 - 6906.486: 58.2418% ( 113) 00:07:39.722 6906.486 - 6956.898: 58.8964% ( 93) 00:07:39.722 6956.898 - 7007.311: 59.5369% ( 91) 00:07:39.722 7007.311 - 7057.723: 60.0859% ( 78) 00:07:39.722 7057.723 - 7108.135: 60.6771% ( 84) 00:07:39.722 7108.135 - 7158.548: 61.2050% ( 75) 00:07:39.722 7158.548 - 7208.960: 61.5780% ( 53) 00:07:39.722 7208.960 - 7259.372: 61.8666% ( 41) 00:07:39.722 7259.372 - 7309.785: 62.1270% ( 37) 00:07:39.722 7309.785 - 7360.197: 62.3874% ( 37) 00:07:39.722 7360.197 - 7410.609: 62.6760% ( 41) 00:07:39.722 7410.609 - 7461.022: 62.9434% ( 38) 00:07:39.722 7461.022 - 7511.434: 63.1898% ( 35) 00:07:39.722 7511.434 - 7561.846: 63.4431% ( 36) 00:07:39.722 7561.846 - 7612.258: 63.6472% ( 29) 00:07:39.722 7612.258 - 7662.671: 63.8162% ( 24) 00:07:39.722 7662.671 - 7713.083: 63.9710% ( 22) 00:07:39.722 7713.083 - 7763.495: 64.1258% ( 22) 00:07:39.722 7763.495 - 7813.908: 64.2596% ( 19) 00:07:39.722 7813.908 - 7864.320: 64.4003% ( 20) 00:07:39.722 7864.320 - 7914.732: 64.5130% ( 16) 00:07:39.722 7914.732 - 7965.145: 64.6819% ( 24) 00:07:39.722 7965.145 - 8015.557: 64.8367% ( 22) 00:07:39.722 8015.557 - 8065.969: 64.9423% ( 15) 00:07:39.722 8065.969 - 8116.382: 65.0901% ( 21) 00:07:39.722 8116.382 - 8166.794: 65.2168% ( 18) 00:07:39.722 8166.794 - 8217.206: 65.3153% ( 14) 00:07:39.722 8217.206 - 8267.618: 65.3927% ( 11) 00:07:39.722 8267.618 - 8318.031: 65.4631% ( 10) 00:07:39.722 8318.031 - 8368.443: 65.5828% ( 17) 00:07:39.722 8368.443 - 8418.855: 65.7376% ( 22) 00:07:39.722 8418.855 - 8469.268: 65.8291% ( 13) 00:07:39.722 8469.268 - 8519.680: 65.9206% ( 13) 00:07:39.722 8519.680 - 8570.092: 66.0543% ( 19) 00:07:39.722 8570.092 - 8620.505: 66.1810% ( 18) 00:07:39.722 8620.505 - 8670.917: 66.2866% ( 15) 00:07:39.722 8670.917 - 8721.329: 66.3781% ( 13) 00:07:39.722 8721.329 - 8771.742: 66.4907% ( 16) 00:07:39.722 8771.742 - 8822.154: 66.6456% ( 22) 00:07:39.722 8822.154 - 8872.566: 66.8004% ( 22) 00:07:39.722 8872.566 - 8922.978: 66.9975% ( 28) 00:07:39.722 8922.978 - 8973.391: 67.2438% ( 35) 00:07:39.722 8973.391 - 9023.803: 67.4479% ( 29) 00:07:39.722 9023.803 - 9074.215: 67.6098% ( 23) 00:07:39.722 9074.215 - 9124.628: 67.7787% ( 24) 00:07:39.722 9124.628 - 9175.040: 67.9054% ( 18) 00:07:39.722 9175.040 - 9225.452: 68.0954% ( 27) 00:07:39.722 9225.452 - 9275.865: 68.2855% ( 27) 00:07:39.722 9275.865 - 9326.277: 68.4614% ( 25) 00:07:39.722 9326.277 - 9376.689: 68.6444% ( 26) 00:07:39.722 9376.689 - 9427.102: 68.8556% ( 30) 00:07:39.722 9427.102 - 9477.514: 69.0526% ( 28) 00:07:39.722 9477.514 - 9527.926: 69.2356% ( 26) 00:07:39.722 9527.926 - 9578.338: 69.4538% ( 31) 00:07:39.722 9578.338 - 9628.751: 69.6579% ( 29) 00:07:39.722 9628.751 - 9679.163: 69.8269% ( 24) 00:07:39.722 9679.163 - 9729.575: 69.9747% ( 21) 00:07:39.722 9729.575 - 9779.988: 70.2210% ( 35) 00:07:39.722 9779.988 - 9830.400: 70.3758% ( 22) 00:07:39.722 9830.400 - 9880.812: 70.5377% ( 23) 00:07:39.722 9880.812 - 9931.225: 70.6926% ( 22) 00:07:39.722 9931.225 - 9981.637: 70.8615% ( 24) 00:07:39.722 9981.637 - 10032.049: 71.0304% ( 24) 00:07:39.722 10032.049 - 10082.462: 71.1923% ( 23) 00:07:39.722 10082.462 - 10132.874: 
71.3542% ( 23) 00:07:39.722 10132.874 - 10183.286: 71.5935% ( 34) 00:07:39.722 10183.286 - 10233.698: 71.7624% ( 24) 00:07:39.722 10233.698 - 10284.111: 71.8961% ( 19) 00:07:39.722 10284.111 - 10334.523: 72.0510% ( 22) 00:07:39.722 10334.523 - 10384.935: 72.1988% ( 21) 00:07:39.722 10384.935 - 10435.348: 72.3114% ( 16) 00:07:39.722 10435.348 - 10485.760: 72.4662% ( 22) 00:07:39.722 10485.760 - 10536.172: 72.6351% ( 24) 00:07:39.722 10536.172 - 10586.585: 72.7548% ( 17) 00:07:39.722 10586.585 - 10636.997: 72.9096% ( 22) 00:07:39.722 10636.997 - 10687.409: 73.0856% ( 25) 00:07:39.722 10687.409 - 10737.822: 73.2264% ( 20) 00:07:39.722 10737.822 - 10788.234: 73.4023% ( 25) 00:07:39.722 10788.234 - 10838.646: 73.5501% ( 21) 00:07:39.722 10838.646 - 10889.058: 73.7050% ( 22) 00:07:39.722 10889.058 - 10939.471: 73.8809% ( 25) 00:07:39.722 10939.471 - 10989.883: 74.0076% ( 18) 00:07:39.722 10989.883 - 11040.295: 74.1343% ( 18) 00:07:39.722 11040.295 - 11090.708: 74.2610% ( 18) 00:07:39.722 11090.708 - 11141.120: 74.3947% ( 19) 00:07:39.722 11141.120 - 11191.532: 74.5003% ( 15) 00:07:39.722 11191.532 - 11241.945: 74.6340% ( 19) 00:07:39.722 11241.945 - 11292.357: 74.7748% ( 20) 00:07:39.722 11292.357 - 11342.769: 74.8733% ( 14) 00:07:39.722 11342.769 - 11393.182: 74.9578% ( 12) 00:07:39.722 11393.182 - 11443.594: 75.0633% ( 15) 00:07:39.722 11443.594 - 11494.006: 75.1548% ( 13) 00:07:39.722 11494.006 - 11544.418: 75.2604% ( 15) 00:07:39.722 11544.418 - 11594.831: 75.3730% ( 16) 00:07:39.722 11594.831 - 11645.243: 75.4997% ( 18) 00:07:39.722 11645.243 - 11695.655: 75.6053% ( 15) 00:07:39.722 11695.655 - 11746.068: 75.7109% ( 15) 00:07:39.722 11746.068 - 11796.480: 75.8235% ( 16) 00:07:39.722 11796.480 - 11846.892: 75.9361% ( 16) 00:07:39.722 11846.892 - 11897.305: 76.0346% ( 14) 00:07:39.722 11897.305 - 11947.717: 76.1402% ( 15) 00:07:39.722 11947.717 - 11998.129: 76.2599% ( 17) 00:07:39.722 11998.129 - 12048.542: 76.4147% ( 22) 00:07:39.722 12048.542 - 12098.954: 76.5766% ( 23) 00:07:39.722 12098.954 - 12149.366: 76.7385% ( 23) 00:07:39.722 12149.366 - 12199.778: 76.8581% ( 17) 00:07:39.722 12199.778 - 12250.191: 77.0341% ( 25) 00:07:39.722 12250.191 - 12300.603: 77.1678% ( 19) 00:07:39.722 12300.603 - 12351.015: 77.3508% ( 26) 00:07:39.722 12351.015 - 12401.428: 77.4845% ( 19) 00:07:39.722 12401.428 - 12451.840: 77.6112% ( 18) 00:07:39.722 12451.840 - 12502.252: 77.7309% ( 17) 00:07:39.722 12502.252 - 12552.665: 77.8364% ( 15) 00:07:39.722 12552.665 - 12603.077: 77.9561% ( 17) 00:07:39.722 12603.077 - 12653.489: 78.0687% ( 16) 00:07:39.722 12653.489 - 12703.902: 78.1813% ( 16) 00:07:39.722 12703.902 - 12754.314: 78.3150% ( 19) 00:07:39.722 12754.314 - 12804.726: 78.4628% ( 21) 00:07:39.722 12804.726 - 12855.138: 78.6036% ( 20) 00:07:39.722 12855.138 - 12905.551: 78.7936% ( 27) 00:07:39.722 12905.551 - 13006.375: 79.2441% ( 64) 00:07:39.722 13006.375 - 13107.200: 79.7649% ( 74) 00:07:39.722 13107.200 - 13208.025: 80.1028% ( 48) 00:07:39.722 13208.025 - 13308.849: 80.4969% ( 56) 00:07:39.722 13308.849 - 13409.674: 81.0037% ( 72) 00:07:39.722 13409.674 - 13510.498: 81.4611% ( 65) 00:07:39.722 13510.498 - 13611.323: 81.8905% ( 61) 00:07:39.722 13611.323 - 13712.148: 82.2565% ( 52) 00:07:39.722 13712.148 - 13812.972: 82.6506% ( 56) 00:07:39.722 13812.972 - 13913.797: 83.0518% ( 57) 00:07:39.722 13913.797 - 14014.622: 83.4178% ( 52) 00:07:39.722 14014.622 - 14115.446: 83.7979% ( 54) 00:07:39.722 14115.446 - 14216.271: 84.1990% ( 57) 00:07:39.722 14216.271 - 14317.095: 84.7199% ( 74) 00:07:39.722 
14317.095 - 14417.920: 85.2477% ( 75) 00:07:39.722 14417.920 - 14518.745: 85.7615% ( 73) 00:07:39.722 14518.745 - 14619.569: 86.3739% ( 87) 00:07:39.722 14619.569 - 14720.394: 86.9088% ( 76) 00:07:39.722 14720.394 - 14821.218: 87.3733% ( 66) 00:07:39.722 14821.218 - 14922.043: 87.8097% ( 62) 00:07:39.722 14922.043 - 15022.868: 88.2883% ( 68) 00:07:39.722 15022.868 - 15123.692: 88.7247% ( 62) 00:07:39.722 15123.692 - 15224.517: 89.1540% ( 61) 00:07:39.722 15224.517 - 15325.342: 89.5130% ( 51) 00:07:39.722 15325.342 - 15426.166: 89.9986% ( 69) 00:07:39.722 15426.166 - 15526.991: 90.4702% ( 67) 00:07:39.722 15526.991 - 15627.815: 91.0473% ( 82) 00:07:39.722 15627.815 - 15728.640: 91.5892% ( 77) 00:07:39.722 15728.640 - 15829.465: 92.1312% ( 77) 00:07:39.722 15829.465 - 15930.289: 92.5605% ( 61) 00:07:39.722 15930.289 - 16031.114: 93.0039% ( 63) 00:07:39.722 16031.114 - 16131.938: 93.4262% ( 60) 00:07:39.722 16131.938 - 16232.763: 93.8204% ( 56) 00:07:39.722 16232.763 - 16333.588: 94.1582% ( 48) 00:07:39.722 16333.588 - 16434.412: 94.3764% ( 31) 00:07:39.722 16434.412 - 16535.237: 94.4749% ( 14) 00:07:39.722 16535.237 - 16636.062: 94.6016% ( 18) 00:07:39.723 16636.062 - 16736.886: 94.7635% ( 23) 00:07:39.723 16736.886 - 16837.711: 94.8902% ( 18) 00:07:39.723 16837.711 - 16938.535: 95.0310% ( 20) 00:07:39.723 16938.535 - 17039.360: 95.2210% ( 27) 00:07:39.723 17039.360 - 17140.185: 95.4533% ( 33) 00:07:39.723 17140.185 - 17241.009: 95.6715% ( 31) 00:07:39.723 17241.009 - 17341.834: 95.9248% ( 36) 00:07:39.723 17341.834 - 17442.658: 96.2204% ( 42) 00:07:39.723 17442.658 - 17543.483: 96.5090% ( 41) 00:07:39.723 17543.483 - 17644.308: 96.7624% ( 36) 00:07:39.723 17644.308 - 17745.132: 96.9947% ( 33) 00:07:39.723 17745.132 - 17845.957: 97.1706% ( 25) 00:07:39.723 17845.957 - 17946.782: 97.3466% ( 25) 00:07:39.723 17946.782 - 18047.606: 97.4592% ( 16) 00:07:39.723 18047.606 - 18148.431: 97.5436% ( 12) 00:07:39.723 18148.431 - 18249.255: 97.6211% ( 11) 00:07:39.723 18249.255 - 18350.080: 97.7126% ( 13) 00:07:39.723 18350.080 - 18450.905: 97.8111% ( 14) 00:07:39.723 18450.905 - 18551.729: 97.9730% ( 23) 00:07:39.723 18551.729 - 18652.554: 98.1560% ( 26) 00:07:39.723 18652.554 - 18753.378: 98.2615% ( 15) 00:07:39.723 18753.378 - 18854.203: 98.4093% ( 21) 00:07:39.723 18854.203 - 18955.028: 98.5290% ( 17) 00:07:39.723 18955.028 - 19055.852: 98.6486% ( 17) 00:07:39.723 19055.852 - 19156.677: 98.7824% ( 19) 00:07:39.723 19156.677 - 19257.502: 98.9161% ( 19) 00:07:39.723 19257.502 - 19358.326: 99.0358% ( 17) 00:07:39.723 19358.326 - 19459.151: 99.0850% ( 7) 00:07:39.723 19459.151 - 19559.975: 99.0991% ( 2) 00:07:39.723 31658.929 - 31860.578: 99.1554% ( 8) 00:07:39.723 31860.578 - 32062.228: 99.2117% ( 8) 00:07:39.723 32062.228 - 32263.877: 99.2751% ( 9) 00:07:39.723 32263.877 - 32465.526: 99.3243% ( 7) 00:07:39.723 32465.526 - 32667.175: 99.3806% ( 8) 00:07:39.723 32667.175 - 32868.825: 99.4440% ( 9) 00:07:39.723 32868.825 - 33070.474: 99.5003% ( 8) 00:07:39.723 33070.474 - 33272.123: 99.5495% ( 7) 00:07:39.723 38918.302 - 39119.951: 99.6059% ( 8) 00:07:39.723 39119.951 - 39321.600: 99.6692% ( 9) 00:07:39.723 39321.600 - 39523.249: 99.7255% ( 8) 00:07:39.723 39523.249 - 39724.898: 99.7889% ( 9) 00:07:39.723 39724.898 - 39926.548: 99.8452% ( 8) 00:07:39.723 39926.548 - 40128.197: 99.9085% ( 9) 00:07:39.723 40128.197 - 40329.846: 99.9718% ( 9) 00:07:39.723 40329.846 - 40531.495: 100.0000% ( 4) 00:07:39.723 00:07:39.723 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.723 
============================================================================== 00:07:39.723 Range in us Cumulative IO count 00:07:39.723 5595.766 - 5620.972: 0.0070% ( 1) 00:07:39.723 5620.972 - 5646.178: 0.0211% ( 2) 00:07:39.723 5646.178 - 5671.385: 0.0352% ( 2) 00:07:39.723 5671.385 - 5696.591: 0.0704% ( 5) 00:07:39.723 5696.591 - 5721.797: 0.1971% ( 18) 00:07:39.723 5721.797 - 5747.003: 0.5138% ( 45) 00:07:39.723 5747.003 - 5772.209: 0.9220% ( 58) 00:07:39.723 5772.209 - 5797.415: 1.6751% ( 107) 00:07:39.723 5797.415 - 5822.622: 2.6675% ( 141) 00:07:39.723 5822.622 - 5847.828: 3.7655% ( 156) 00:07:39.723 5847.828 - 5873.034: 4.7016% ( 133) 00:07:39.723 5873.034 - 5898.240: 5.7503% ( 149) 00:07:39.723 5898.240 - 5923.446: 7.0172% ( 180) 00:07:39.723 5923.446 - 5948.652: 8.3404% ( 188) 00:07:39.723 5948.652 - 5973.858: 9.7128% ( 195) 00:07:39.723 5973.858 - 5999.065: 11.2472% ( 218) 00:07:39.723 5999.065 - 6024.271: 12.7393% ( 212) 00:07:39.723 6024.271 - 6049.477: 14.2103% ( 209) 00:07:39.723 6049.477 - 6074.683: 15.7447% ( 218) 00:07:39.723 6074.683 - 6099.889: 17.2790% ( 218) 00:07:39.723 6099.889 - 6125.095: 18.7711% ( 212) 00:07:39.723 6125.095 - 6150.302: 20.3618% ( 226) 00:07:39.723 6150.302 - 6175.508: 22.0158% ( 235) 00:07:39.723 6175.508 - 6200.714: 23.5712% ( 221) 00:07:39.723 6200.714 - 6225.920: 25.2041% ( 232) 00:07:39.723 6225.920 - 6251.126: 26.8581% ( 235) 00:07:39.723 6251.126 - 6276.332: 28.5473% ( 240) 00:07:39.723 6276.332 - 6301.538: 30.3280% ( 253) 00:07:39.723 6301.538 - 6326.745: 32.1016% ( 252) 00:07:39.723 6326.745 - 6351.951: 33.8260% ( 245) 00:07:39.723 6351.951 - 6377.157: 35.5152% ( 240) 00:07:39.723 6377.157 - 6402.363: 37.2889% ( 252) 00:07:39.723 6402.363 - 6427.569: 39.0132% ( 245) 00:07:39.723 6427.569 - 6452.775: 40.8080% ( 255) 00:07:39.723 6452.775 - 6503.188: 44.2638% ( 491) 00:07:39.723 6503.188 - 6553.600: 47.4733% ( 456) 00:07:39.723 6553.600 - 6604.012: 50.1900% ( 386) 00:07:39.723 6604.012 - 6654.425: 52.4352% ( 319) 00:07:39.723 6654.425 - 6704.837: 54.2089% ( 252) 00:07:39.723 6704.837 - 6755.249: 55.5602% ( 192) 00:07:39.723 6755.249 - 6805.662: 56.5386% ( 139) 00:07:39.723 6805.662 - 6856.074: 57.3057% ( 109) 00:07:39.723 6856.074 - 6906.486: 57.9673% ( 94) 00:07:39.723 6906.486 - 6956.898: 58.6008% ( 90) 00:07:39.723 6956.898 - 7007.311: 59.2061% ( 86) 00:07:39.723 7007.311 - 7057.723: 59.7340% ( 75) 00:07:39.723 7057.723 - 7108.135: 60.2407% ( 72) 00:07:39.723 7108.135 - 7158.548: 60.6067% ( 52) 00:07:39.723 7158.548 - 7208.960: 60.9164% ( 44) 00:07:39.723 7208.960 - 7259.372: 61.1979% ( 40) 00:07:39.723 7259.372 - 7309.785: 61.4583% ( 37) 00:07:39.723 7309.785 - 7360.197: 61.6695% ( 30) 00:07:39.723 7360.197 - 7410.609: 61.9369% ( 38) 00:07:39.723 7410.609 - 7461.022: 62.1903% ( 36) 00:07:39.723 7461.022 - 7511.434: 62.4789% ( 41) 00:07:39.723 7511.434 - 7561.846: 62.7886% ( 44) 00:07:39.723 7561.846 - 7612.258: 63.0701% ( 40) 00:07:39.723 7612.258 - 7662.671: 63.3446% ( 39) 00:07:39.723 7662.671 - 7713.083: 63.5769% ( 33) 00:07:39.723 7713.083 - 7763.495: 63.8514% ( 39) 00:07:39.723 7763.495 - 7813.908: 64.0695% ( 31) 00:07:39.723 7813.908 - 7864.320: 64.2807% ( 30) 00:07:39.723 7864.320 - 7914.732: 64.4707% ( 27) 00:07:39.723 7914.732 - 7965.145: 64.6396% ( 24) 00:07:39.723 7965.145 - 8015.557: 64.8226% ( 26) 00:07:39.723 8015.557 - 8065.969: 64.9986% ( 25) 00:07:39.723 8065.969 - 8116.382: 65.1394% ( 20) 00:07:39.723 8116.382 - 8166.794: 65.2872% ( 21) 00:07:39.723 8166.794 - 8217.206: 65.4279% ( 20) 00:07:39.723 8217.206 - 8267.618: 
65.5898% ( 23) 00:07:39.723 8267.618 - 8318.031: 65.7165% ( 18) 00:07:39.723 8318.031 - 8368.443: 65.8361% ( 17) 00:07:39.723 8368.443 - 8418.855: 65.9558% ( 17) 00:07:39.723 8418.855 - 8469.268: 66.0895% ( 19) 00:07:39.723 8469.268 - 8519.680: 66.2162% ( 18) 00:07:39.723 8519.680 - 8570.092: 66.2936% ( 11) 00:07:39.723 8570.092 - 8620.505: 66.3922% ( 14) 00:07:39.723 8620.505 - 8670.917: 66.4837% ( 13) 00:07:39.723 8670.917 - 8721.329: 66.5752% ( 13) 00:07:39.723 8721.329 - 8771.742: 66.6737% ( 14) 00:07:39.723 8771.742 - 8822.154: 66.7863% ( 16) 00:07:39.723 8822.154 - 8872.566: 66.9200% ( 19) 00:07:39.723 8872.566 - 8922.978: 67.0467% ( 18) 00:07:39.723 8922.978 - 8973.391: 67.1593% ( 16) 00:07:39.723 8973.391 - 9023.803: 67.3072% ( 21) 00:07:39.723 9023.803 - 9074.215: 67.4620% ( 22) 00:07:39.723 9074.215 - 9124.628: 67.6239% ( 23) 00:07:39.723 9124.628 - 9175.040: 67.8139% ( 27) 00:07:39.723 9175.040 - 9225.452: 68.0814% ( 38) 00:07:39.723 9225.452 - 9275.865: 68.2855% ( 29) 00:07:39.723 9275.865 - 9326.277: 68.4966% ( 30) 00:07:39.723 9326.277 - 9376.689: 68.7218% ( 32) 00:07:39.723 9376.689 - 9427.102: 68.9611% ( 34) 00:07:39.723 9427.102 - 9477.514: 69.1723% ( 30) 00:07:39.723 9477.514 - 9527.926: 69.3834% ( 30) 00:07:39.723 9527.926 - 9578.338: 69.5876% ( 29) 00:07:39.723 9578.338 - 9628.751: 69.8057% ( 31) 00:07:39.723 9628.751 - 9679.163: 70.0099% ( 29) 00:07:39.723 9679.163 - 9729.575: 70.1999% ( 27) 00:07:39.723 9729.575 - 9779.988: 70.3407% ( 20) 00:07:39.723 9779.988 - 9830.400: 70.4744% ( 19) 00:07:39.723 9830.400 - 9880.812: 70.6151% ( 20) 00:07:39.723 9880.812 - 9931.225: 70.7559% ( 20) 00:07:39.723 9931.225 - 9981.637: 70.9108% ( 22) 00:07:39.723 9981.637 - 10032.049: 71.0023% ( 13) 00:07:39.723 10032.049 - 10082.462: 71.0586% ( 8) 00:07:39.723 10082.462 - 10132.874: 71.1219% ( 9) 00:07:39.723 10132.874 - 10183.286: 71.2345% ( 16) 00:07:39.723 10183.286 - 10233.698: 71.3753% ( 20) 00:07:39.723 10233.698 - 10284.111: 71.4949% ( 17) 00:07:39.723 10284.111 - 10334.523: 71.6568% ( 23) 00:07:39.723 10334.523 - 10384.935: 71.8609% ( 29) 00:07:39.723 10384.935 - 10435.348: 72.0298% ( 24) 00:07:39.723 10435.348 - 10485.760: 72.1706% ( 20) 00:07:39.723 10485.760 - 10536.172: 72.4240% ( 36) 00:07:39.723 10536.172 - 10586.585: 72.6492% ( 32) 00:07:39.723 10586.585 - 10636.997: 72.8252% ( 25) 00:07:39.723 10636.997 - 10687.409: 72.9730% ( 21) 00:07:39.723 10687.409 - 10737.822: 73.1489% ( 25) 00:07:39.723 10737.822 - 10788.234: 73.3038% ( 22) 00:07:39.723 10788.234 - 10838.646: 73.4586% ( 22) 00:07:39.723 10838.646 - 10889.058: 73.6416% ( 26) 00:07:39.723 10889.058 - 10939.471: 73.8176% ( 25) 00:07:39.723 10939.471 - 10989.883: 73.9794% ( 23) 00:07:39.723 10989.883 - 11040.295: 74.1343% ( 22) 00:07:39.723 11040.295 - 11090.708: 74.3102% ( 25) 00:07:39.723 11090.708 - 11141.120: 74.4792% ( 24) 00:07:39.723 11141.120 - 11191.532: 74.6270% ( 21) 00:07:39.723 11191.532 - 11241.945: 74.8029% ( 25) 00:07:39.723 11241.945 - 11292.357: 74.9507% ( 21) 00:07:39.723 11292.357 - 11342.769: 75.0704% ( 17) 00:07:39.723 11342.769 - 11393.182: 75.1619% ( 13) 00:07:39.723 11393.182 - 11443.594: 75.2956% ( 19) 00:07:39.723 11443.594 - 11494.006: 75.4012% ( 15) 00:07:39.723 11494.006 - 11544.418: 75.5349% ( 19) 00:07:39.723 11544.418 - 11594.831: 75.6757% ( 20) 00:07:39.723 11594.831 - 11645.243: 75.8024% ( 18) 00:07:39.724 11645.243 - 11695.655: 75.9220% ( 17) 00:07:39.724 11695.655 - 11746.068: 76.0487% ( 18) 00:07:39.724 11746.068 - 11796.480: 76.1543% ( 15) 00:07:39.724 11796.480 - 11846.892: 
76.2458% ( 13) 00:07:39.724 11846.892 - 11897.305: 76.3373% ( 13) 00:07:39.724 11897.305 - 11947.717: 76.4780% ( 20) 00:07:39.724 11947.717 - 11998.129: 76.5907% ( 16) 00:07:39.724 11998.129 - 12048.542: 76.7244% ( 19) 00:07:39.724 12048.542 - 12098.954: 76.8792% ( 22) 00:07:39.724 12098.954 - 12149.366: 77.0341% ( 22) 00:07:39.724 12149.366 - 12199.778: 77.1959% ( 23) 00:07:39.724 12199.778 - 12250.191: 77.3578% ( 23) 00:07:39.724 12250.191 - 12300.603: 77.5127% ( 22) 00:07:39.724 12300.603 - 12351.015: 77.6394% ( 18) 00:07:39.724 12351.015 - 12401.428: 77.8153% ( 25) 00:07:39.724 12401.428 - 12451.840: 77.9490% ( 19) 00:07:39.724 12451.840 - 12502.252: 78.0828% ( 19) 00:07:39.724 12502.252 - 12552.665: 78.2095% ( 18) 00:07:39.724 12552.665 - 12603.077: 78.3573% ( 21) 00:07:39.724 12603.077 - 12653.489: 78.4910% ( 19) 00:07:39.724 12653.489 - 12703.902: 78.6388% ( 21) 00:07:39.724 12703.902 - 12754.314: 78.7796% ( 20) 00:07:39.724 12754.314 - 12804.726: 78.9344% ( 22) 00:07:39.724 12804.726 - 12855.138: 79.0329% ( 14) 00:07:39.724 12855.138 - 12905.551: 79.1244% ( 13) 00:07:39.724 12905.551 - 13006.375: 79.3004% ( 25) 00:07:39.724 13006.375 - 13107.200: 79.4693% ( 24) 00:07:39.724 13107.200 - 13208.025: 79.7086% ( 34) 00:07:39.724 13208.025 - 13308.849: 80.1098% ( 57) 00:07:39.724 13308.849 - 13409.674: 80.4899% ( 54) 00:07:39.724 13409.674 - 13510.498: 80.8418% ( 50) 00:07:39.724 13510.498 - 13611.323: 81.2641% ( 60) 00:07:39.724 13611.323 - 13712.148: 81.7005% ( 62) 00:07:39.724 13712.148 - 13812.972: 82.1157% ( 59) 00:07:39.724 13812.972 - 13913.797: 82.5943% ( 68) 00:07:39.724 13913.797 - 14014.622: 83.1363% ( 77) 00:07:39.724 14014.622 - 14115.446: 83.7275% ( 84) 00:07:39.724 14115.446 - 14216.271: 84.3680% ( 91) 00:07:39.724 14216.271 - 14317.095: 84.9521% ( 83) 00:07:39.724 14317.095 - 14417.920: 85.5363% ( 83) 00:07:39.724 14417.920 - 14518.745: 86.0149% ( 68) 00:07:39.724 14518.745 - 14619.569: 86.6132% ( 85) 00:07:39.724 14619.569 - 14720.394: 87.1974% ( 83) 00:07:39.724 14720.394 - 14821.218: 87.7675% ( 81) 00:07:39.724 14821.218 - 14922.043: 88.2812% ( 73) 00:07:39.724 14922.043 - 15022.868: 88.7599% ( 68) 00:07:39.724 15022.868 - 15123.692: 89.2455% ( 69) 00:07:39.724 15123.692 - 15224.517: 89.7523% ( 72) 00:07:39.724 15224.517 - 15325.342: 90.2027% ( 64) 00:07:39.724 15325.342 - 15426.166: 90.5968% ( 56) 00:07:39.724 15426.166 - 15526.991: 90.9276% ( 47) 00:07:39.724 15526.991 - 15627.815: 91.2373% ( 44) 00:07:39.724 15627.815 - 15728.640: 91.5118% ( 39) 00:07:39.724 15728.640 - 15829.465: 91.7793% ( 38) 00:07:39.724 15829.465 - 15930.289: 92.1030% ( 46) 00:07:39.724 15930.289 - 16031.114: 92.4338% ( 47) 00:07:39.724 16031.114 - 16131.938: 92.8491% ( 59) 00:07:39.724 16131.938 - 16232.763: 93.1729% ( 46) 00:07:39.724 16232.763 - 16333.588: 93.5107% ( 48) 00:07:39.724 16333.588 - 16434.412: 93.8626% ( 50) 00:07:39.724 16434.412 - 16535.237: 94.2286% ( 52) 00:07:39.724 16535.237 - 16636.062: 94.5946% ( 52) 00:07:39.724 16636.062 - 16736.886: 94.9395% ( 49) 00:07:39.724 16736.886 - 16837.711: 95.3195% ( 54) 00:07:39.724 16837.711 - 16938.535: 95.6574% ( 48) 00:07:39.724 16938.535 - 17039.360: 95.8967% ( 34) 00:07:39.724 17039.360 - 17140.185: 96.0797% ( 26) 00:07:39.724 17140.185 - 17241.009: 96.2979% ( 31) 00:07:39.724 17241.009 - 17341.834: 96.5231% ( 32) 00:07:39.724 17341.834 - 17442.658: 96.7413% ( 31) 00:07:39.724 17442.658 - 17543.483: 96.9313% ( 27) 00:07:39.724 17543.483 - 17644.308: 97.1073% ( 25) 00:07:39.724 17644.308 - 17745.132: 97.2621% ( 22) 00:07:39.724 
17745.132 - 17845.957: 97.3536% ( 13) 00:07:39.724 17845.957 - 17946.782: 97.4381% ( 12) 00:07:39.724 17946.782 - 18047.606: 97.5225% ( 12) 00:07:39.724 18047.606 - 18148.431: 97.5999% ( 11) 00:07:39.724 18148.431 - 18249.255: 97.6844% ( 12) 00:07:39.724 18249.255 - 18350.080: 97.7829% ( 14) 00:07:39.724 18350.080 - 18450.905: 97.9167% ( 19) 00:07:39.724 18450.905 - 18551.729: 98.0434% ( 18) 00:07:39.724 18551.729 - 18652.554: 98.1841% ( 20) 00:07:39.724 18652.554 - 18753.378: 98.3249% ( 20) 00:07:39.724 18753.378 - 18854.203: 98.4586% ( 19) 00:07:39.724 18854.203 - 18955.028: 98.5853% ( 18) 00:07:39.724 18955.028 - 19055.852: 98.7050% ( 17) 00:07:39.724 19055.852 - 19156.677: 98.8387% ( 19) 00:07:39.724 19156.677 - 19257.502: 98.9583% ( 17) 00:07:39.724 19257.502 - 19358.326: 99.0639% ( 15) 00:07:39.724 19358.326 - 19459.151: 99.0991% ( 5) 00:07:39.724 30449.034 - 30650.683: 99.1273% ( 4) 00:07:39.724 30650.683 - 30852.332: 99.1906% ( 9) 00:07:39.724 30852.332 - 31053.982: 99.2469% ( 8) 00:07:39.724 31053.982 - 31255.631: 99.3102% ( 9) 00:07:39.724 31255.631 - 31457.280: 99.3666% ( 8) 00:07:39.724 31457.280 - 31658.929: 99.4229% ( 8) 00:07:39.724 31658.929 - 31860.578: 99.4792% ( 8) 00:07:39.724 31860.578 - 32062.228: 99.5425% ( 9) 00:07:39.724 32062.228 - 32263.877: 99.5495% ( 1) 00:07:39.724 38111.705 - 38313.354: 99.5566% ( 1) 00:07:39.724 38313.354 - 38515.003: 99.5918% ( 5) 00:07:39.724 38515.003 - 38716.652: 99.6340% ( 6) 00:07:39.724 38716.652 - 38918.302: 99.6762% ( 6) 00:07:39.724 38918.302 - 39119.951: 99.7325% ( 8) 00:07:39.724 39119.951 - 39321.600: 99.7959% ( 9) 00:07:39.724 39321.600 - 39523.249: 99.8592% ( 9) 00:07:39.724 39523.249 - 39724.898: 99.9155% ( 8) 00:07:39.724 39724.898 - 39926.548: 99.9789% ( 9) 00:07:39.724 39926.548 - 40128.197: 100.0000% ( 3) 00:07:39.724 00:07:39.724 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.724 ============================================================================== 00:07:39.724 Range in us Cumulative IO count 00:07:39.724 5545.354 - 5570.560: 0.0070% ( 1) 00:07:39.724 5570.560 - 5595.766: 0.0422% ( 5) 00:07:39.724 5595.766 - 5620.972: 0.1619% ( 17) 00:07:39.724 5620.972 - 5646.178: 0.4505% ( 41) 00:07:39.724 5646.178 - 5671.385: 0.7601% ( 44) 00:07:39.724 5671.385 - 5696.591: 1.2317% ( 67) 00:07:39.724 5696.591 - 5721.797: 1.8018% ( 81) 00:07:39.724 5721.797 - 5747.003: 2.4986% ( 99) 00:07:39.724 5747.003 - 5772.209: 3.2869% ( 112) 00:07:39.724 5772.209 - 5797.415: 4.1033% ( 116) 00:07:39.724 5797.415 - 5822.622: 5.0676% ( 137) 00:07:39.724 5822.622 - 5847.828: 6.1374% ( 152) 00:07:39.724 5847.828 - 5873.034: 7.3128% ( 167) 00:07:39.724 5873.034 - 5898.240: 8.5304% ( 173) 00:07:39.724 5898.240 - 5923.446: 9.7128% ( 168) 00:07:39.724 5923.446 - 5948.652: 11.0079% ( 184) 00:07:39.724 5948.652 - 5973.858: 12.1762% ( 166) 00:07:39.724 5973.858 - 5999.065: 13.5487% ( 195) 00:07:39.724 5999.065 - 6024.271: 14.8719% ( 188) 00:07:39.724 6024.271 - 6049.477: 16.1177% ( 177) 00:07:39.724 6049.477 - 6074.683: 17.3705% ( 178) 00:07:39.724 6074.683 - 6099.889: 18.7007% ( 189) 00:07:39.724 6099.889 - 6125.095: 19.9535% ( 178) 00:07:39.724 6125.095 - 6150.302: 21.3823% ( 203) 00:07:39.724 6150.302 - 6175.508: 22.7618% ( 196) 00:07:39.724 6175.508 - 6200.714: 24.2117% ( 206) 00:07:39.724 6200.714 - 6225.920: 25.7531% ( 219) 00:07:39.724 6225.920 - 6251.126: 27.1396% ( 197) 00:07:39.724 6251.126 - 6276.332: 28.6529% ( 215) 00:07:39.724 6276.332 - 6301.538: 30.1028% ( 206) 00:07:39.724 6301.538 - 6326.745: 31.5034% ( 199) 
00:07:39.724 6326.745 - 6351.951: 32.9744% ( 209) 00:07:39.724 6351.951 - 6377.157: 34.4595% ( 211) 00:07:39.724 6377.157 - 6402.363: 35.9657% ( 214) 00:07:39.724 6402.363 - 6427.569: 37.3874% ( 202) 00:07:39.724 6427.569 - 6452.775: 39.0203% ( 232) 00:07:39.724 6452.775 - 6503.188: 42.0045% ( 424) 00:07:39.724 6503.188 - 6553.600: 45.0591% ( 434) 00:07:39.724 6553.600 - 6604.012: 47.9167% ( 406) 00:07:39.724 6604.012 - 6654.425: 50.3730% ( 349) 00:07:39.724 6654.425 - 6704.837: 52.5760% ( 313) 00:07:39.724 6704.837 - 6755.249: 54.3708% ( 255) 00:07:39.724 6755.249 - 6805.662: 55.8699% ( 213) 00:07:39.724 6805.662 - 6856.074: 57.0172% ( 163) 00:07:39.724 6856.074 - 6906.486: 57.7773% ( 108) 00:07:39.724 6906.486 - 6956.898: 58.3896% ( 87) 00:07:39.724 6956.898 - 7007.311: 58.9245% ( 76) 00:07:39.724 7007.311 - 7057.723: 59.4313% ( 72) 00:07:39.724 7057.723 - 7108.135: 59.9733% ( 77) 00:07:39.724 7108.135 - 7158.548: 60.4448% ( 67) 00:07:39.724 7158.548 - 7208.960: 60.7897% ( 49) 00:07:39.724 7208.960 - 7259.372: 61.1486% ( 51) 00:07:39.724 7259.372 - 7309.785: 61.4372% ( 41) 00:07:39.724 7309.785 - 7360.197: 61.6906% ( 36) 00:07:39.724 7360.197 - 7410.609: 61.9721% ( 40) 00:07:39.724 7410.609 - 7461.022: 62.2185% ( 35) 00:07:39.724 7461.022 - 7511.434: 62.3803% ( 23) 00:07:39.724 7511.434 - 7561.846: 62.5985% ( 31) 00:07:39.724 7561.846 - 7612.258: 62.8590% ( 37) 00:07:39.724 7612.258 - 7662.671: 63.0771% ( 31) 00:07:39.724 7662.671 - 7713.083: 63.2812% ( 29) 00:07:39.724 7713.083 - 7763.495: 63.4361% ( 22) 00:07:39.724 7763.495 - 7813.908: 63.6472% ( 30) 00:07:39.724 7813.908 - 7864.320: 63.8373% ( 27) 00:07:39.724 7864.320 - 7914.732: 64.0414% ( 29) 00:07:39.724 7914.732 - 7965.145: 64.2244% ( 26) 00:07:39.724 7965.145 - 8015.557: 64.4144% ( 27) 00:07:39.724 8015.557 - 8065.969: 64.5833% ( 24) 00:07:39.724 8065.969 - 8116.382: 64.7311% ( 21) 00:07:39.724 8116.382 - 8166.794: 64.8930% ( 23) 00:07:39.724 8166.794 - 8217.206: 65.0338% ( 20) 00:07:39.724 8217.206 - 8267.618: 65.1957% ( 23) 00:07:39.725 8267.618 - 8318.031: 65.3646% ( 24) 00:07:39.725 8318.031 - 8368.443: 65.5546% ( 27) 00:07:39.725 8368.443 - 8418.855: 65.7376% ( 26) 00:07:39.725 8418.855 - 8469.268: 65.9136% ( 25) 00:07:39.725 8469.268 - 8519.680: 66.1458% ( 33) 00:07:39.725 8519.680 - 8570.092: 66.3288% ( 26) 00:07:39.725 8570.092 - 8620.505: 66.4907% ( 23) 00:07:39.725 8620.505 - 8670.917: 66.6878% ( 28) 00:07:39.725 8670.917 - 8721.329: 66.9341% ( 35) 00:07:39.725 8721.329 - 8771.742: 67.1453% ( 30) 00:07:39.725 8771.742 - 8822.154: 67.3142% ( 24) 00:07:39.725 8822.154 - 8872.566: 67.4901% ( 25) 00:07:39.725 8872.566 - 8922.978: 67.6661% ( 25) 00:07:39.725 8922.978 - 8973.391: 67.8350% ( 24) 00:07:39.725 8973.391 - 9023.803: 68.0039% ( 24) 00:07:39.725 9023.803 - 9074.215: 68.1658% ( 23) 00:07:39.725 9074.215 - 9124.628: 68.3981% ( 33) 00:07:39.725 9124.628 - 9175.040: 68.5881% ( 27) 00:07:39.725 9175.040 - 9225.452: 68.7641% ( 25) 00:07:39.725 9225.452 - 9275.865: 68.9330% ( 24) 00:07:39.725 9275.865 - 9326.277: 69.0738% ( 20) 00:07:39.725 9326.277 - 9376.689: 69.2145% ( 20) 00:07:39.725 9376.689 - 9427.102: 69.3553% ( 20) 00:07:39.725 9427.102 - 9477.514: 69.4749% ( 17) 00:07:39.725 9477.514 - 9527.926: 69.6368% ( 23) 00:07:39.725 9527.926 - 9578.338: 69.7917% ( 22) 00:07:39.725 9578.338 - 9628.751: 69.9395% ( 21) 00:07:39.725 9628.751 - 9679.163: 70.0802% ( 20) 00:07:39.725 9679.163 - 9729.575: 70.2984% ( 31) 00:07:39.725 9729.575 - 9779.988: 70.4251% ( 18) 00:07:39.725 9779.988 - 9830.400: 70.5448% ( 17) 00:07:39.725 
9830.400 - 9880.812: 70.6644% ( 17) 00:07:39.725 9880.812 - 9931.225: 70.7841% ( 17) 00:07:39.725 9931.225 - 9981.637: 70.9671% ( 26) 00:07:39.725 9981.637 - 10032.049: 71.0938% ( 18) 00:07:39.725 10032.049 - 10082.462: 71.2345% ( 20) 00:07:39.725 10082.462 - 10132.874: 71.3753% ( 20) 00:07:39.725 10132.874 - 10183.286: 71.5301% ( 22) 00:07:39.725 10183.286 - 10233.698: 71.6850% ( 22) 00:07:39.725 10233.698 - 10284.111: 71.8609% ( 25) 00:07:39.725 10284.111 - 10334.523: 72.0158% ( 22) 00:07:39.725 10334.523 - 10384.935: 72.1706% ( 22) 00:07:39.725 10384.935 - 10435.348: 72.3325% ( 23) 00:07:39.725 10435.348 - 10485.760: 72.4873% ( 22) 00:07:39.725 10485.760 - 10536.172: 72.6140% ( 18) 00:07:39.725 10536.172 - 10586.585: 72.8111% ( 28) 00:07:39.725 10586.585 - 10636.997: 73.0011% ( 27) 00:07:39.725 10636.997 - 10687.409: 73.1489% ( 21) 00:07:39.725 10687.409 - 10737.822: 73.3108% ( 23) 00:07:39.725 10737.822 - 10788.234: 73.4657% ( 22) 00:07:39.725 10788.234 - 10838.646: 73.6135% ( 21) 00:07:39.725 10838.646 - 10889.058: 73.7894% ( 25) 00:07:39.725 10889.058 - 10939.471: 73.9372% ( 21) 00:07:39.725 10939.471 - 10989.883: 74.0780% ( 20) 00:07:39.725 10989.883 - 11040.295: 74.2258% ( 21) 00:07:39.725 11040.295 - 11090.708: 74.3666% ( 20) 00:07:39.725 11090.708 - 11141.120: 74.5144% ( 21) 00:07:39.725 11141.120 - 11191.532: 74.6270% ( 16) 00:07:39.725 11191.532 - 11241.945: 74.7255% ( 14) 00:07:39.725 11241.945 - 11292.357: 74.8663% ( 20) 00:07:39.725 11292.357 - 11342.769: 74.9930% ( 18) 00:07:39.725 11342.769 - 11393.182: 75.0915% ( 14) 00:07:39.725 11393.182 - 11443.594: 75.1971% ( 15) 00:07:39.725 11443.594 - 11494.006: 75.3449% ( 21) 00:07:39.725 11494.006 - 11544.418: 75.4645% ( 17) 00:07:39.725 11544.418 - 11594.831: 75.5842% ( 17) 00:07:39.725 11594.831 - 11645.243: 75.7038% ( 17) 00:07:39.725 11645.243 - 11695.655: 75.7953% ( 13) 00:07:39.725 11695.655 - 11746.068: 75.9502% ( 22) 00:07:39.725 11746.068 - 11796.480: 76.0557% ( 15) 00:07:39.725 11796.480 - 11846.892: 76.1543% ( 14) 00:07:39.725 11846.892 - 11897.305: 76.3232% ( 24) 00:07:39.725 11897.305 - 11947.717: 76.4499% ( 18) 00:07:39.725 11947.717 - 11998.129: 76.5907% ( 20) 00:07:39.725 11998.129 - 12048.542: 76.8018% ( 30) 00:07:39.725 12048.542 - 12098.954: 76.9355% ( 19) 00:07:39.725 12098.954 - 12149.366: 77.0481% ( 16) 00:07:39.725 12149.366 - 12199.778: 77.1889% ( 20) 00:07:39.725 12199.778 - 12250.191: 77.3860% ( 28) 00:07:39.725 12250.191 - 12300.603: 77.5408% ( 22) 00:07:39.725 12300.603 - 12351.015: 77.7168% ( 25) 00:07:39.725 12351.015 - 12401.428: 77.8294% ( 16) 00:07:39.725 12401.428 - 12451.840: 77.9702% ( 20) 00:07:39.725 12451.840 - 12502.252: 78.0898% ( 17) 00:07:39.725 12502.252 - 12552.665: 78.2306% ( 20) 00:07:39.725 12552.665 - 12603.077: 78.3291% ( 14) 00:07:39.725 12603.077 - 12653.489: 78.4699% ( 20) 00:07:39.725 12653.489 - 12703.902: 78.5332% ( 9) 00:07:39.725 12703.902 - 12754.314: 78.6458% ( 16) 00:07:39.725 12754.314 - 12804.726: 78.7936% ( 21) 00:07:39.725 12804.726 - 12855.138: 78.9696% ( 25) 00:07:39.725 12855.138 - 12905.551: 79.0892% ( 17) 00:07:39.725 12905.551 - 13006.375: 79.3356% ( 35) 00:07:39.725 13006.375 - 13107.200: 79.6734% ( 48) 00:07:39.725 13107.200 - 13208.025: 80.0465% ( 53) 00:07:39.725 13208.025 - 13308.849: 80.4054% ( 51) 00:07:39.725 13308.849 - 13409.674: 80.8277% ( 60) 00:07:39.725 13409.674 - 13510.498: 81.1726% ( 49) 00:07:39.725 13510.498 - 13611.323: 81.5526% ( 54) 00:07:39.725 13611.323 - 13712.148: 81.8905% ( 48) 00:07:39.725 13712.148 - 13812.972: 82.5310% ( 91) 
00:07:39.725 13812.972 - 13913.797: 83.0518% ( 74) 00:07:39.725 13913.797 - 14014.622: 83.5586% ( 72) 00:07:39.725 14014.622 - 14115.446: 84.0231% ( 66) 00:07:39.725 14115.446 - 14216.271: 84.5932% ( 81) 00:07:39.725 14216.271 - 14317.095: 84.9733% ( 54) 00:07:39.725 14317.095 - 14417.920: 85.4659% ( 70) 00:07:39.725 14417.920 - 14518.745: 85.9868% ( 74) 00:07:39.725 14518.745 - 14619.569: 86.4794% ( 70) 00:07:39.725 14619.569 - 14720.394: 86.9510% ( 67) 00:07:39.725 14720.394 - 14821.218: 87.4367% ( 69) 00:07:39.725 14821.218 - 14922.043: 87.9997% ( 80) 00:07:39.725 14922.043 - 15022.868: 88.4783% ( 68) 00:07:39.725 15022.868 - 15123.692: 88.9569% ( 68) 00:07:39.725 15123.692 - 15224.517: 89.4989% ( 77) 00:07:39.725 15224.517 - 15325.342: 90.0479% ( 78) 00:07:39.725 15325.342 - 15426.166: 90.4772% ( 61) 00:07:39.725 15426.166 - 15526.991: 91.1458% ( 95) 00:07:39.725 15526.991 - 15627.815: 91.5822% ( 62) 00:07:39.725 15627.815 - 15728.640: 92.0819% ( 71) 00:07:39.725 15728.640 - 15829.465: 92.5253% ( 63) 00:07:39.725 15829.465 - 15930.289: 92.9899% ( 66) 00:07:39.725 15930.289 - 16031.114: 93.3207% ( 47) 00:07:39.725 16031.114 - 16131.938: 93.6092% ( 41) 00:07:39.725 16131.938 - 16232.763: 93.8133% ( 29) 00:07:39.725 16232.763 - 16333.588: 94.0667% ( 36) 00:07:39.725 16333.588 - 16434.412: 94.2849% ( 31) 00:07:39.725 16434.412 - 16535.237: 94.5101% ( 32) 00:07:39.725 16535.237 - 16636.062: 94.6861% ( 25) 00:07:39.725 16636.062 - 16736.886: 94.8691% ( 26) 00:07:39.725 16736.886 - 16837.711: 95.0028% ( 19) 00:07:39.725 16837.711 - 16938.535: 95.1436% ( 20) 00:07:39.725 16938.535 - 17039.360: 95.3125% ( 24) 00:07:39.725 17039.360 - 17140.185: 95.4673% ( 22) 00:07:39.725 17140.185 - 17241.009: 95.6855% ( 31) 00:07:39.725 17241.009 - 17341.834: 95.8685% ( 26) 00:07:39.725 17341.834 - 17442.658: 96.1149% ( 35) 00:07:39.725 17442.658 - 17543.483: 96.3190% ( 29) 00:07:39.725 17543.483 - 17644.308: 96.5020% ( 26) 00:07:39.725 17644.308 - 17745.132: 96.7131% ( 30) 00:07:39.725 17745.132 - 17845.957: 96.9876% ( 39) 00:07:39.725 17845.957 - 17946.782: 97.1073% ( 17) 00:07:39.725 17946.782 - 18047.606: 97.2691% ( 23) 00:07:39.725 18047.606 - 18148.431: 97.4521% ( 26) 00:07:39.725 18148.431 - 18249.255: 97.6140% ( 23) 00:07:39.725 18249.255 - 18350.080: 97.7829% ( 24) 00:07:39.725 18350.080 - 18450.905: 97.9378% ( 22) 00:07:39.725 18450.905 - 18551.729: 98.0856% ( 21) 00:07:39.725 18551.729 - 18652.554: 98.2897% ( 29) 00:07:39.725 18652.554 - 18753.378: 98.3953% ( 15) 00:07:39.725 18753.378 - 18854.203: 98.4727% ( 11) 00:07:39.725 18854.203 - 18955.028: 98.5360% ( 9) 00:07:39.725 18955.028 - 19055.852: 98.6698% ( 19) 00:07:39.725 19055.852 - 19156.677: 98.7190% ( 7) 00:07:39.725 19156.677 - 19257.502: 98.7613% ( 6) 00:07:39.725 19257.502 - 19358.326: 98.8316% ( 10) 00:07:39.725 19358.326 - 19459.151: 98.8950% ( 9) 00:07:39.725 19459.151 - 19559.975: 98.9724% ( 11) 00:07:39.725 19559.975 - 19660.800: 99.0287% ( 8) 00:07:39.725 19660.800 - 19761.625: 99.0639% ( 5) 00:07:39.725 19761.625 - 19862.449: 99.0991% ( 5) 00:07:39.725 29239.138 - 29440.788: 99.1624% ( 9) 00:07:39.725 29440.788 - 29642.437: 99.1906% ( 4) 00:07:39.725 29642.437 - 29844.086: 99.2399% ( 7) 00:07:39.725 29844.086 - 30045.735: 99.2891% ( 7) 00:07:39.725 30045.735 - 30247.385: 99.3454% ( 8) 00:07:39.725 30247.385 - 30449.034: 99.3947% ( 7) 00:07:39.725 30449.034 - 30650.683: 99.4510% ( 8) 00:07:39.725 30650.683 - 30852.332: 99.5003% ( 7) 00:07:39.725 30852.332 - 31053.982: 99.5495% ( 7) 00:07:39.725 37305.108 - 37506.757: 99.5707% ( 3) 
00:07:39.725 37506.757 - 37708.406: 99.6340% ( 9) 00:07:39.725 37708.406 - 37910.055: 99.6692% ( 5) 00:07:39.725 37910.055 - 38111.705: 99.7255% ( 8) 00:07:39.725 38111.705 - 38313.354: 99.7959% ( 10) 00:07:39.725 38313.354 - 38515.003: 99.8452% ( 7) 00:07:39.725 38515.003 - 38716.652: 99.9015% ( 8) 00:07:39.725 38716.652 - 38918.302: 99.9437% ( 6) 00:07:39.725 38918.302 - 39119.951: 100.0000% ( 8) 00:07:39.725 00:07:39.725 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.725 ============================================================================== 00:07:39.725 Range in us Cumulative IO count 00:07:39.725 5595.766 - 5620.972: 0.0070% ( 1) 00:07:39.725 5620.972 - 5646.178: 0.0211% ( 2) 00:07:39.725 5646.178 - 5671.385: 0.0845% ( 9) 00:07:39.725 5671.385 - 5696.591: 0.1900% ( 15) 00:07:39.726 5696.591 - 5721.797: 0.4645% ( 39) 00:07:39.726 5721.797 - 5747.003: 0.6827% ( 31) 00:07:39.726 5747.003 - 5772.209: 1.1684% ( 69) 00:07:39.726 5772.209 - 5797.415: 1.9848% ( 116) 00:07:39.726 5797.415 - 5822.622: 2.7872% ( 114) 00:07:39.726 5822.622 - 5847.828: 3.6599% ( 124) 00:07:39.726 5847.828 - 5873.034: 4.8494% ( 169) 00:07:39.726 5873.034 - 5898.240: 5.7855% ( 133) 00:07:39.726 5898.240 - 5923.446: 6.9116% ( 160) 00:07:39.726 5923.446 - 5948.652: 8.2981% ( 197) 00:07:39.726 5948.652 - 5973.858: 9.7410% ( 205) 00:07:39.726 5973.858 - 5999.065: 11.3739% ( 232) 00:07:39.726 5999.065 - 6024.271: 12.6197% ( 177) 00:07:39.726 6024.271 - 6049.477: 14.1118% ( 212) 00:07:39.726 6049.477 - 6074.683: 15.8080% ( 241) 00:07:39.726 6074.683 - 6099.889: 17.4409% ( 232) 00:07:39.726 6099.889 - 6125.095: 18.9611% ( 216) 00:07:39.726 6125.095 - 6150.302: 20.4673% ( 214) 00:07:39.726 6150.302 - 6175.508: 22.0228% ( 221) 00:07:39.726 6175.508 - 6200.714: 23.6205% ( 227) 00:07:39.726 6200.714 - 6225.920: 25.2111% ( 226) 00:07:39.726 6225.920 - 6251.126: 26.8863% ( 238) 00:07:39.726 6251.126 - 6276.332: 28.5332% ( 234) 00:07:39.726 6276.332 - 6301.538: 30.1731% ( 233) 00:07:39.726 6301.538 - 6326.745: 31.9890% ( 258) 00:07:39.726 6326.745 - 6351.951: 33.7908% ( 256) 00:07:39.726 6351.951 - 6377.157: 35.5926% ( 256) 00:07:39.726 6377.157 - 6402.363: 37.3100% ( 244) 00:07:39.726 6402.363 - 6427.569: 39.0203% ( 243) 00:07:39.726 6427.569 - 6452.775: 40.6813% ( 236) 00:07:39.726 6452.775 - 6503.188: 44.3553% ( 522) 00:07:39.726 6503.188 - 6553.600: 47.6140% ( 463) 00:07:39.726 6553.600 - 6604.012: 50.3660% ( 391) 00:07:39.726 6604.012 - 6654.425: 52.7097% ( 333) 00:07:39.726 6654.425 - 6704.837: 54.5890% ( 267) 00:07:39.726 6704.837 - 6755.249: 56.0177% ( 203) 00:07:39.726 6755.249 - 6805.662: 57.0383% ( 145) 00:07:39.726 6805.662 - 6856.074: 57.8477% ( 115) 00:07:39.726 6856.074 - 6906.486: 58.5163% ( 95) 00:07:39.726 6906.486 - 6956.898: 59.1498% ( 90) 00:07:39.726 6956.898 - 7007.311: 59.7128% ( 80) 00:07:39.726 7007.311 - 7057.723: 60.2055% ( 70) 00:07:39.726 7057.723 - 7108.135: 60.5997% ( 56) 00:07:39.726 7108.135 - 7158.548: 60.9375% ( 48) 00:07:39.726 7158.548 - 7208.960: 61.2401% ( 43) 00:07:39.726 7208.960 - 7259.372: 61.4935% ( 36) 00:07:39.726 7259.372 - 7309.785: 61.7751% ( 40) 00:07:39.726 7309.785 - 7360.197: 61.9651% ( 27) 00:07:39.726 7360.197 - 7410.609: 62.1129% ( 21) 00:07:39.726 7410.609 - 7461.022: 62.2466% ( 19) 00:07:39.726 7461.022 - 7511.434: 62.3663% ( 17) 00:07:39.726 7511.434 - 7561.846: 62.5141% ( 21) 00:07:39.726 7561.846 - 7612.258: 62.7111% ( 28) 00:07:39.726 7612.258 - 7662.671: 62.9505% ( 34) 00:07:39.726 7662.671 - 7713.083: 63.0842% ( 19) 00:07:39.726 
7713.083 - 7763.495: 63.2672% ( 26) 00:07:39.726 7763.495 - 7813.908: 63.3798% ( 16) 00:07:39.726 7813.908 - 7864.320: 63.5135% ( 19) 00:07:39.726 7864.320 - 7914.732: 63.6543% ( 20) 00:07:39.726 7914.732 - 7965.145: 63.7950% ( 20) 00:07:39.726 7965.145 - 8015.557: 63.9217% ( 18) 00:07:39.726 8015.557 - 8065.969: 64.1540% ( 33) 00:07:39.726 8065.969 - 8116.382: 64.3088% ( 22) 00:07:39.726 8116.382 - 8166.794: 64.4426% ( 19) 00:07:39.726 8166.794 - 8217.206: 64.6396% ( 28) 00:07:39.726 8217.206 - 8267.618: 64.8297% ( 27) 00:07:39.726 8267.618 - 8318.031: 64.9916% ( 23) 00:07:39.726 8318.031 - 8368.443: 65.2168% ( 32) 00:07:39.726 8368.443 - 8418.855: 65.4913% ( 39) 00:07:39.726 8418.855 - 8469.268: 65.7235% ( 33) 00:07:39.726 8469.268 - 8519.680: 65.9840% ( 37) 00:07:39.726 8519.680 - 8570.092: 66.2444% ( 37) 00:07:39.726 8570.092 - 8620.505: 66.5048% ( 37) 00:07:39.726 8620.505 - 8670.917: 66.7511% ( 35) 00:07:39.726 8670.917 - 8721.329: 66.9975% ( 35) 00:07:39.726 8721.329 - 8771.742: 67.2860% ( 41) 00:07:39.726 8771.742 - 8822.154: 67.5253% ( 34) 00:07:39.726 8822.154 - 8872.566: 67.7646% ( 34) 00:07:39.726 8872.566 - 8922.978: 68.0039% ( 34) 00:07:39.726 8922.978 - 8973.391: 68.2503% ( 35) 00:07:39.726 8973.391 - 9023.803: 68.5037% ( 36) 00:07:39.726 9023.803 - 9074.215: 68.6585% ( 22) 00:07:39.726 9074.215 - 9124.628: 68.8204% ( 23) 00:07:39.726 9124.628 - 9175.040: 68.9893% ( 24) 00:07:39.726 9175.040 - 9225.452: 69.1793% ( 27) 00:07:39.726 9225.452 - 9275.865: 69.3342% ( 22) 00:07:39.726 9275.865 - 9326.277: 69.5101% ( 25) 00:07:39.726 9326.277 - 9376.689: 69.6791% ( 24) 00:07:39.726 9376.689 - 9427.102: 69.8409% ( 23) 00:07:39.726 9427.102 - 9477.514: 69.9958% ( 22) 00:07:39.726 9477.514 - 9527.926: 70.1295% ( 19) 00:07:39.726 9527.926 - 9578.338: 70.2703% ( 20) 00:07:39.726 9578.338 - 9628.751: 70.4533% ( 26) 00:07:39.726 9628.751 - 9679.163: 70.5800% ( 18) 00:07:39.726 9679.163 - 9729.575: 70.6996% ( 17) 00:07:39.726 9729.575 - 9779.988: 70.7981% ( 14) 00:07:39.726 9779.988 - 9830.400: 70.9108% ( 16) 00:07:39.726 9830.400 - 9880.812: 71.0163% ( 15) 00:07:39.726 9880.812 - 9931.225: 71.1289% ( 16) 00:07:39.726 9931.225 - 9981.637: 71.2345% ( 15) 00:07:39.726 9981.637 - 10032.049: 71.3190% ( 12) 00:07:39.726 10032.049 - 10082.462: 71.3894% ( 10) 00:07:39.726 10082.462 - 10132.874: 71.4668% ( 11) 00:07:39.726 10132.874 - 10183.286: 71.5653% ( 14) 00:07:39.726 10183.286 - 10233.698: 71.6920% ( 18) 00:07:39.726 10233.698 - 10284.111: 71.8046% ( 16) 00:07:39.726 10284.111 - 10334.523: 71.9595% ( 22) 00:07:39.726 10334.523 - 10384.935: 72.1284% ( 24) 00:07:39.726 10384.935 - 10435.348: 72.2903% ( 23) 00:07:39.726 10435.348 - 10485.760: 72.4381% ( 21) 00:07:39.726 10485.760 - 10536.172: 72.5999% ( 23) 00:07:39.726 10536.172 - 10586.585: 72.7548% ( 22) 00:07:39.726 10586.585 - 10636.997: 72.9307% ( 25) 00:07:39.726 10636.997 - 10687.409: 73.1137% ( 26) 00:07:39.726 10687.409 - 10737.822: 73.2967% ( 26) 00:07:39.726 10737.822 - 10788.234: 73.4586% ( 23) 00:07:39.726 10788.234 - 10838.646: 73.6205% ( 23) 00:07:39.726 10838.646 - 10889.058: 73.7683% ( 21) 00:07:39.726 10889.058 - 10939.471: 74.0006% ( 33) 00:07:39.726 10939.471 - 10989.883: 74.1976% ( 28) 00:07:39.726 10989.883 - 11040.295: 74.3666% ( 24) 00:07:39.726 11040.295 - 11090.708: 74.5636% ( 28) 00:07:39.726 11090.708 - 11141.120: 74.7255% ( 23) 00:07:39.726 11141.120 - 11191.532: 74.8874% ( 23) 00:07:39.726 11191.532 - 11241.945: 75.0211% ( 19) 00:07:39.726 11241.945 - 11292.357: 75.1267% ( 15) 00:07:39.726 11292.357 - 11342.769: 
75.2182% ( 13) 00:07:39.726 11342.769 - 11393.182: 75.2956% ( 11) 00:07:39.726 11393.182 - 11443.594: 75.3730% ( 11) 00:07:39.726 11443.594 - 11494.006: 75.4505% ( 11) 00:07:39.726 11494.006 - 11544.418: 75.5560% ( 15) 00:07:39.726 11544.418 - 11594.831: 75.6264% ( 10) 00:07:39.726 11594.831 - 11645.243: 75.7179% ( 13) 00:07:39.726 11645.243 - 11695.655: 75.7883% ( 10) 00:07:39.726 11695.655 - 11746.068: 75.9009% ( 16) 00:07:39.726 11746.068 - 11796.480: 76.0065% ( 15) 00:07:39.726 11796.480 - 11846.892: 76.0839% ( 11) 00:07:39.726 11846.892 - 11897.305: 76.1543% ( 10) 00:07:39.726 11897.305 - 11947.717: 76.2387% ( 12) 00:07:39.726 11947.717 - 11998.129: 76.3373% ( 14) 00:07:39.726 11998.129 - 12048.542: 76.4780% ( 20) 00:07:39.726 12048.542 - 12098.954: 76.6329% ( 22) 00:07:39.726 12098.954 - 12149.366: 76.7525% ( 17) 00:07:39.726 12149.366 - 12199.778: 76.8651% ( 16) 00:07:39.726 12199.778 - 12250.191: 76.9778% ( 16) 00:07:39.726 12250.191 - 12300.603: 77.1044% ( 18) 00:07:39.726 12300.603 - 12351.015: 77.2452% ( 20) 00:07:39.726 12351.015 - 12401.428: 77.3860% ( 20) 00:07:39.726 12401.428 - 12451.840: 77.5197% ( 19) 00:07:39.726 12451.840 - 12502.252: 77.6675% ( 21) 00:07:39.726 12502.252 - 12552.665: 77.7942% ( 18) 00:07:39.726 12552.665 - 12603.077: 77.9279% ( 19) 00:07:39.726 12603.077 - 12653.489: 78.0546% ( 18) 00:07:39.726 12653.489 - 12703.902: 78.2306% ( 25) 00:07:39.726 12703.902 - 12754.314: 78.3713% ( 20) 00:07:39.726 12754.314 - 12804.726: 78.5332% ( 23) 00:07:39.726 12804.726 - 12855.138: 78.6669% ( 19) 00:07:39.727 12855.138 - 12905.551: 78.7866% ( 17) 00:07:39.727 12905.551 - 13006.375: 79.0541% ( 38) 00:07:39.727 13006.375 - 13107.200: 79.3285% ( 39) 00:07:39.727 13107.200 - 13208.025: 79.5749% ( 35) 00:07:39.727 13208.025 - 13308.849: 79.8212% ( 35) 00:07:39.727 13308.849 - 13409.674: 80.1239% ( 43) 00:07:39.727 13409.674 - 13510.498: 80.5251% ( 57) 00:07:39.727 13510.498 - 13611.323: 80.9122% ( 55) 00:07:39.727 13611.323 - 13712.148: 81.3485% ( 62) 00:07:39.727 13712.148 - 13812.972: 81.7568% ( 58) 00:07:39.727 13812.972 - 13913.797: 82.2283% ( 67) 00:07:39.727 13913.797 - 14014.622: 82.7843% ( 79) 00:07:39.727 14014.622 - 14115.446: 83.3685% ( 83) 00:07:39.727 14115.446 - 14216.271: 83.9034% ( 76) 00:07:39.727 14216.271 - 14317.095: 84.5721% ( 95) 00:07:39.727 14317.095 - 14417.920: 85.1633% ( 84) 00:07:39.727 14417.920 - 14518.745: 85.7334% ( 81) 00:07:39.727 14518.745 - 14619.569: 86.3105% ( 82) 00:07:39.727 14619.569 - 14720.394: 86.8032% ( 70) 00:07:39.727 14720.394 - 14821.218: 87.3803% ( 82) 00:07:39.727 14821.218 - 14922.043: 88.0208% ( 91) 00:07:39.727 14922.043 - 15022.868: 88.6754% ( 93) 00:07:39.727 15022.868 - 15123.692: 89.2948% ( 88) 00:07:39.727 15123.692 - 15224.517: 89.8367% ( 77) 00:07:39.727 15224.517 - 15325.342: 90.3857% ( 78) 00:07:39.727 15325.342 - 15426.166: 90.9980% ( 87) 00:07:39.727 15426.166 - 15526.991: 91.5400% ( 77) 00:07:39.727 15526.991 - 15627.815: 92.0115% ( 67) 00:07:39.727 15627.815 - 15728.640: 92.5253% ( 73) 00:07:39.727 15728.640 - 15829.465: 92.8984% ( 53) 00:07:39.727 15829.465 - 15930.289: 93.2995% ( 57) 00:07:39.727 15930.289 - 16031.114: 93.5459% ( 35) 00:07:39.727 16031.114 - 16131.938: 93.7993% ( 36) 00:07:39.727 16131.938 - 16232.763: 94.0667% ( 38) 00:07:39.727 16232.763 - 16333.588: 94.3131% ( 35) 00:07:39.727 16333.588 - 16434.412: 94.4961% ( 26) 00:07:39.727 16434.412 - 16535.237: 94.7072% ( 30) 00:07:39.727 16535.237 - 16636.062: 94.8902% ( 26) 00:07:39.727 16636.062 - 16736.886: 95.0662% ( 25) 00:07:39.727 
16736.886 - 16837.711: 95.1999% ( 19) 00:07:39.727 16837.711 - 16938.535: 95.3195% ( 17) 00:07:39.727 16938.535 - 17039.360: 95.4392% ( 17) 00:07:39.727 17039.360 - 17140.185: 95.6363% ( 28) 00:07:39.727 17140.185 - 17241.009: 95.8263% ( 27) 00:07:39.727 17241.009 - 17341.834: 95.9741% ( 21) 00:07:39.727 17341.834 - 17442.658: 96.1641% ( 27) 00:07:39.727 17442.658 - 17543.483: 96.3612% ( 28) 00:07:39.727 17543.483 - 17644.308: 96.5020% ( 20) 00:07:39.727 17644.308 - 17745.132: 96.6357% ( 19) 00:07:39.727 17745.132 - 17845.957: 96.7483% ( 16) 00:07:39.727 17845.957 - 17946.782: 96.8961% ( 21) 00:07:39.727 17946.782 - 18047.606: 97.0510% ( 22) 00:07:39.727 18047.606 - 18148.431: 97.2199% ( 24) 00:07:39.727 18148.431 - 18249.255: 97.4521% ( 33) 00:07:39.727 18249.255 - 18350.080: 97.6562% ( 29) 00:07:39.727 18350.080 - 18450.905: 97.9026% ( 35) 00:07:39.727 18450.905 - 18551.729: 98.1560% ( 36) 00:07:39.727 18551.729 - 18652.554: 98.3319% ( 25) 00:07:39.727 18652.554 - 18753.378: 98.4797% ( 21) 00:07:39.727 18753.378 - 18854.203: 98.6346% ( 22) 00:07:39.727 18854.203 - 18955.028: 98.7965% ( 23) 00:07:39.727 18955.028 - 19055.852: 98.9372% ( 20) 00:07:39.727 19055.852 - 19156.677: 99.0428% ( 15) 00:07:39.727 19156.677 - 19257.502: 99.0991% ( 8) 00:07:39.727 27424.295 - 27625.945: 99.1061% ( 1) 00:07:39.727 27625.945 - 27827.594: 99.1624% ( 8) 00:07:39.727 27827.594 - 28029.243: 99.2188% ( 8) 00:07:39.727 28029.243 - 28230.892: 99.2751% ( 8) 00:07:39.727 28230.892 - 28432.542: 99.3314% ( 8) 00:07:39.727 28432.542 - 28634.191: 99.3877% ( 8) 00:07:39.727 28634.191 - 28835.840: 99.4440% ( 8) 00:07:39.727 28835.840 - 29037.489: 99.5073% ( 9) 00:07:39.727 29037.489 - 29239.138: 99.5495% ( 6) 00:07:39.727 36095.212 - 36296.862: 99.5777% ( 4) 00:07:39.727 36296.862 - 36498.511: 99.6410% ( 9) 00:07:39.727 36498.511 - 36700.160: 99.6974% ( 8) 00:07:39.727 36700.160 - 36901.809: 99.7537% ( 8) 00:07:39.727 36901.809 - 37103.458: 99.8100% ( 8) 00:07:39.727 37103.458 - 37305.108: 99.8733% ( 9) 00:07:39.727 37305.108 - 37506.757: 99.9296% ( 8) 00:07:39.727 37506.757 - 37708.406: 99.9859% ( 8) 00:07:39.727 37708.406 - 37910.055: 100.0000% ( 2) 00:07:39.727 00:07:39.727 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.727 ============================================================================== 00:07:39.727 Range in us Cumulative IO count 00:07:39.727 5620.972 - 5646.178: 0.0070% ( 1) 00:07:39.727 5646.178 - 5671.385: 0.0633% ( 8) 00:07:39.727 5671.385 - 5696.591: 0.1126% ( 7) 00:07:39.727 5696.591 - 5721.797: 0.2956% ( 26) 00:07:39.727 5721.797 - 5747.003: 0.7953% ( 71) 00:07:39.727 5747.003 - 5772.209: 1.4288% ( 90) 00:07:39.727 5772.209 - 5797.415: 1.8722% ( 63) 00:07:39.727 5797.415 - 5822.622: 2.4352% ( 80) 00:07:39.727 5822.622 - 5847.828: 3.4065% ( 138) 00:07:39.727 5847.828 - 5873.034: 4.3778% ( 138) 00:07:39.727 5873.034 - 5898.240: 5.6869% ( 186) 00:07:39.727 5898.240 - 5923.446: 7.0735% ( 197) 00:07:39.727 5923.446 - 5948.652: 8.5515% ( 210) 00:07:39.727 5948.652 - 5973.858: 9.9099% ( 193) 00:07:39.727 5973.858 - 5999.065: 11.2542% ( 191) 00:07:39.727 5999.065 - 6024.271: 12.6971% ( 205) 00:07:39.727 6024.271 - 6049.477: 14.3651% ( 237) 00:07:39.727 6049.477 - 6074.683: 15.9628% ( 227) 00:07:39.727 6074.683 - 6099.889: 17.5746% ( 229) 00:07:39.727 6099.889 - 6125.095: 19.1160% ( 219) 00:07:39.727 6125.095 - 6150.302: 20.6715% ( 221) 00:07:39.727 6150.302 - 6175.508: 22.2480% ( 224) 00:07:39.727 6175.508 - 6200.714: 23.8457% ( 227) 00:07:39.727 6200.714 - 6225.920: 25.5068% 
( 236) 00:07:39.727 6225.920 - 6251.126: 27.2593% ( 249) 00:07:39.727 6251.126 - 6276.332: 28.9485% ( 240) 00:07:39.727 6276.332 - 6301.538: 30.7010% ( 249) 00:07:39.727 6301.538 - 6326.745: 32.4535% ( 249) 00:07:39.727 6326.745 - 6351.951: 34.1990% ( 248) 00:07:39.727 6351.951 - 6377.157: 35.9938% ( 255) 00:07:39.727 6377.157 - 6402.363: 37.8378% ( 262) 00:07:39.727 6402.363 - 6427.569: 39.5904% ( 249) 00:07:39.727 6427.569 - 6452.775: 41.3570% ( 251) 00:07:39.727 6452.775 - 6503.188: 44.8972% ( 503) 00:07:39.727 6503.188 - 6553.600: 48.1278% ( 459) 00:07:39.727 6553.600 - 6604.012: 50.9150% ( 396) 00:07:39.727 6604.012 - 6654.425: 53.2939% ( 338) 00:07:39.727 6654.425 - 6704.837: 55.1168% ( 259) 00:07:39.727 6704.837 - 6755.249: 56.3978% ( 182) 00:07:39.727 6755.249 - 6805.662: 57.3691% ( 138) 00:07:39.727 6805.662 - 6856.074: 58.1644% ( 113) 00:07:39.727 6856.074 - 6906.486: 58.8049% ( 91) 00:07:39.727 6906.486 - 6956.898: 59.4243% ( 88) 00:07:39.727 6956.898 - 7007.311: 59.9592% ( 76) 00:07:39.727 7007.311 - 7057.723: 60.4237% ( 66) 00:07:39.727 7057.723 - 7108.135: 60.8108% ( 55) 00:07:39.727 7108.135 - 7158.548: 61.1979% ( 55) 00:07:39.727 7158.548 - 7208.960: 61.4935% ( 42) 00:07:39.727 7208.960 - 7259.372: 61.7680% ( 39) 00:07:39.727 7259.372 - 7309.785: 62.0144% ( 35) 00:07:39.727 7309.785 - 7360.197: 62.1762% ( 23) 00:07:39.727 7360.197 - 7410.609: 62.3240% ( 21) 00:07:39.727 7410.609 - 7461.022: 62.4859% ( 23) 00:07:39.727 7461.022 - 7511.434: 62.6408% ( 22) 00:07:39.727 7511.434 - 7561.846: 62.7745% ( 19) 00:07:39.727 7561.846 - 7612.258: 62.9153% ( 20) 00:07:39.727 7612.258 - 7662.671: 63.0631% ( 21) 00:07:39.727 7662.671 - 7713.083: 63.2038% ( 20) 00:07:39.727 7713.083 - 7763.495: 63.3446% ( 20) 00:07:39.727 7763.495 - 7813.908: 63.5346% ( 27) 00:07:39.727 7813.908 - 7864.320: 63.7176% ( 26) 00:07:39.727 7864.320 - 7914.732: 63.9006% ( 26) 00:07:39.727 7914.732 - 7965.145: 64.0555% ( 22) 00:07:39.727 7965.145 - 8015.557: 64.2173% ( 23) 00:07:39.727 8015.557 - 8065.969: 64.3651% ( 21) 00:07:39.727 8065.969 - 8116.382: 64.4918% ( 18) 00:07:39.727 8116.382 - 8166.794: 64.6256% ( 19) 00:07:39.727 8166.794 - 8217.206: 64.7804% ( 22) 00:07:39.727 8217.206 - 8267.618: 64.9423% ( 23) 00:07:39.727 8267.618 - 8318.031: 65.1112% ( 24) 00:07:39.727 8318.031 - 8368.443: 65.2942% ( 26) 00:07:39.727 8368.443 - 8418.855: 65.5265% ( 33) 00:07:39.727 8418.855 - 8469.268: 65.7798% ( 36) 00:07:39.727 8469.268 - 8519.680: 66.0051% ( 32) 00:07:39.727 8519.680 - 8570.092: 66.2233% ( 31) 00:07:39.727 8570.092 - 8620.505: 66.4485% ( 32) 00:07:39.727 8620.505 - 8670.917: 66.6737% ( 32) 00:07:39.727 8670.917 - 8721.329: 66.9130% ( 34) 00:07:39.727 8721.329 - 8771.742: 67.1523% ( 34) 00:07:39.727 8771.742 - 8822.154: 67.3916% ( 34) 00:07:39.727 8822.154 - 8872.566: 67.6239% ( 33) 00:07:39.727 8872.566 - 8922.978: 67.8702% ( 35) 00:07:39.727 8922.978 - 8973.391: 68.0814% ( 30) 00:07:39.727 8973.391 - 9023.803: 68.2784% ( 28) 00:07:39.727 9023.803 - 9074.215: 68.4755% ( 28) 00:07:39.727 9074.215 - 9124.628: 68.6585% ( 26) 00:07:39.727 9124.628 - 9175.040: 68.8485% ( 27) 00:07:39.727 9175.040 - 9225.452: 69.0597% ( 30) 00:07:39.727 9225.452 - 9275.865: 69.2145% ( 22) 00:07:39.727 9275.865 - 9326.277: 69.3975% ( 26) 00:07:39.727 9326.277 - 9376.689: 69.5735% ( 25) 00:07:39.727 9376.689 - 9427.102: 69.7424% ( 24) 00:07:39.727 9427.102 - 9477.514: 69.9395% ( 28) 00:07:39.727 9477.514 - 9527.926: 70.1295% ( 27) 00:07:39.727 9527.926 - 9578.338: 70.3125% ( 26) 00:07:39.727 9578.338 - 9628.751: 70.4673% ( 22) 
00:07:39.727 9628.751 - 9679.163: 70.6222% ( 22) 00:07:39.727 9679.163 - 9729.575: 70.7489% ( 18) 00:07:39.727 9729.575 - 9779.988: 70.9248% ( 25) 00:07:39.727 9779.988 - 9830.400: 71.1219% ( 28) 00:07:39.728 9830.400 - 9880.812: 71.2486% ( 18) 00:07:39.728 9880.812 - 9931.225: 71.3753% ( 18) 00:07:39.728 9931.225 - 9981.637: 71.4809% ( 15) 00:07:39.728 9981.637 - 10032.049: 71.5935% ( 16) 00:07:39.728 10032.049 - 10082.462: 71.6709% ( 11) 00:07:39.728 10082.462 - 10132.874: 71.7624% ( 13) 00:07:39.728 10132.874 - 10183.286: 71.8328% ( 10) 00:07:39.728 10183.286 - 10233.698: 71.9383% ( 15) 00:07:39.728 10233.698 - 10284.111: 72.0650% ( 18) 00:07:39.728 10284.111 - 10334.523: 72.1776% ( 16) 00:07:39.728 10334.523 - 10384.935: 72.2903% ( 16) 00:07:39.728 10384.935 - 10435.348: 72.4451% ( 22) 00:07:39.728 10435.348 - 10485.760: 72.5929% ( 21) 00:07:39.728 10485.760 - 10536.172: 72.7618% ( 24) 00:07:39.728 10536.172 - 10586.585: 72.9026% ( 20) 00:07:39.728 10586.585 - 10636.997: 73.0574% ( 22) 00:07:39.728 10636.997 - 10687.409: 73.1912% ( 19) 00:07:39.728 10687.409 - 10737.822: 73.3460% ( 22) 00:07:39.728 10737.822 - 10788.234: 73.5008% ( 22) 00:07:39.728 10788.234 - 10838.646: 73.6627% ( 23) 00:07:39.728 10838.646 - 10889.058: 73.8316% ( 24) 00:07:39.728 10889.058 - 10939.471: 73.9935% ( 23) 00:07:39.728 10939.471 - 10989.883: 74.1695% ( 25) 00:07:39.728 10989.883 - 11040.295: 74.3525% ( 26) 00:07:39.728 11040.295 - 11090.708: 74.5355% ( 26) 00:07:39.728 11090.708 - 11141.120: 74.6692% ( 19) 00:07:39.728 11141.120 - 11191.532: 74.8100% ( 20) 00:07:39.728 11191.532 - 11241.945: 74.9367% ( 18) 00:07:39.728 11241.945 - 11292.357: 75.0774% ( 20) 00:07:39.728 11292.357 - 11342.769: 75.1900% ( 16) 00:07:39.728 11342.769 - 11393.182: 75.3026% ( 16) 00:07:39.728 11393.182 - 11443.594: 75.3871% ( 12) 00:07:39.728 11443.594 - 11494.006: 75.4856% ( 14) 00:07:39.728 11494.006 - 11544.418: 75.5560% ( 10) 00:07:39.728 11544.418 - 11594.831: 75.6686% ( 16) 00:07:39.728 11594.831 - 11645.243: 75.7953% ( 18) 00:07:39.728 11645.243 - 11695.655: 75.9291% ( 19) 00:07:39.728 11695.655 - 11746.068: 76.0135% ( 12) 00:07:39.728 11746.068 - 11796.480: 76.1050% ( 13) 00:07:39.728 11796.480 - 11846.892: 76.1965% ( 13) 00:07:39.728 11846.892 - 11897.305: 76.2950% ( 14) 00:07:39.728 11897.305 - 11947.717: 76.3865% ( 13) 00:07:39.728 11947.717 - 11998.129: 76.4499% ( 9) 00:07:39.728 11998.129 - 12048.542: 76.5484% ( 14) 00:07:39.728 12048.542 - 12098.954: 76.6540% ( 15) 00:07:39.728 12098.954 - 12149.366: 76.7877% ( 19) 00:07:39.728 12149.366 - 12199.778: 76.9285% ( 20) 00:07:39.728 12199.778 - 12250.191: 77.1326% ( 29) 00:07:39.728 12250.191 - 12300.603: 77.3086% ( 25) 00:07:39.728 12300.603 - 12351.015: 77.4493% ( 20) 00:07:39.728 12351.015 - 12401.428: 77.5901% ( 20) 00:07:39.728 12401.428 - 12451.840: 77.7309% ( 20) 00:07:39.728 12451.840 - 12502.252: 77.8575% ( 18) 00:07:39.728 12502.252 - 12552.665: 78.0053% ( 21) 00:07:39.728 12552.665 - 12603.077: 78.1320% ( 18) 00:07:39.728 12603.077 - 12653.489: 78.2728% ( 20) 00:07:39.728 12653.489 - 12703.902: 78.4206% ( 21) 00:07:39.728 12703.902 - 12754.314: 78.5684% ( 21) 00:07:39.728 12754.314 - 12804.726: 78.7162% ( 21) 00:07:39.728 12804.726 - 12855.138: 78.8359% ( 17) 00:07:39.728 12855.138 - 12905.551: 78.9344% ( 14) 00:07:39.728 12905.551 - 13006.375: 79.1033% ( 24) 00:07:39.728 13006.375 - 13107.200: 79.3567% ( 36) 00:07:39.728 13107.200 - 13208.025: 79.5960% ( 34) 00:07:39.728 13208.025 - 13308.849: 79.9409% ( 49) 00:07:39.728 13308.849 - 13409.674: 80.3913% ( 64) 
00:07:39.728 13409.674 - 13510.498: 80.8207% ( 61) 00:07:39.728 13510.498 - 13611.323: 81.3274% ( 72) 00:07:39.728 13611.323 - 13712.148: 81.8412% ( 73) 00:07:39.728 13712.148 - 13812.972: 82.3972% ( 79) 00:07:39.728 13812.972 - 13913.797: 82.9533% ( 79) 00:07:39.728 13913.797 - 14014.622: 83.5023% ( 78) 00:07:39.728 14014.622 - 14115.446: 84.0372% ( 76) 00:07:39.728 14115.446 - 14216.271: 84.5369% ( 71) 00:07:39.728 14216.271 - 14317.095: 84.9521% ( 59) 00:07:39.728 14317.095 - 14417.920: 85.3604% ( 58) 00:07:39.728 14417.920 - 14518.745: 85.8390% ( 68) 00:07:39.728 14518.745 - 14619.569: 86.2965% ( 65) 00:07:39.728 14619.569 - 14720.394: 86.8314% ( 76) 00:07:39.728 14720.394 - 14821.218: 87.5704% ( 105) 00:07:39.728 14821.218 - 14922.043: 88.2461% ( 96) 00:07:39.728 14922.043 - 15022.868: 88.9077% ( 94) 00:07:39.728 15022.868 - 15123.692: 89.4355% ( 75) 00:07:39.728 15123.692 - 15224.517: 90.0056% ( 81) 00:07:39.728 15224.517 - 15325.342: 90.4772% ( 67) 00:07:39.728 15325.342 - 15426.166: 90.9558% ( 68) 00:07:39.728 15426.166 - 15526.991: 91.3781% ( 60) 00:07:39.728 15526.991 - 15627.815: 91.7652% ( 55) 00:07:39.728 15627.815 - 15728.640: 92.1382% ( 53) 00:07:39.728 15728.640 - 15829.465: 92.4972% ( 51) 00:07:39.728 15829.465 - 15930.289: 92.8209% ( 46) 00:07:39.728 15930.289 - 16031.114: 93.1236% ( 43) 00:07:39.728 16031.114 - 16131.938: 93.3699% ( 35) 00:07:39.728 16131.938 - 16232.763: 93.6867% ( 45) 00:07:39.728 16232.763 - 16333.588: 93.9682% ( 40) 00:07:39.728 16333.588 - 16434.412: 94.2638% ( 42) 00:07:39.728 16434.412 - 16535.237: 94.5101% ( 35) 00:07:39.728 16535.237 - 16636.062: 94.7213% ( 30) 00:07:39.728 16636.062 - 16736.886: 94.9324% ( 30) 00:07:39.728 16736.886 - 16837.711: 95.1154% ( 26) 00:07:39.728 16837.711 - 16938.535: 95.3055% ( 27) 00:07:39.728 16938.535 - 17039.360: 95.4603% ( 22) 00:07:39.728 17039.360 - 17140.185: 95.6785% ( 31) 00:07:39.728 17140.185 - 17241.009: 95.8756% ( 28) 00:07:39.728 17241.009 - 17341.834: 96.0515% ( 25) 00:07:39.728 17341.834 - 17442.658: 96.2134% ( 23) 00:07:39.728 17442.658 - 17543.483: 96.3471% ( 19) 00:07:39.728 17543.483 - 17644.308: 96.5090% ( 23) 00:07:39.728 17644.308 - 17745.132: 96.6498% ( 20) 00:07:39.728 17745.132 - 17845.957: 96.7976% ( 21) 00:07:39.728 17845.957 - 17946.782: 96.9172% ( 17) 00:07:39.728 17946.782 - 18047.606: 97.0298% ( 16) 00:07:39.728 18047.606 - 18148.431: 97.1284% ( 14) 00:07:39.728 18148.431 - 18249.255: 97.2269% ( 14) 00:07:39.728 18249.255 - 18350.080: 97.3466% ( 17) 00:07:39.728 18350.080 - 18450.905: 97.5014% ( 22) 00:07:39.728 18450.905 - 18551.729: 97.7337% ( 33) 00:07:39.728 18551.729 - 18652.554: 97.9026% ( 24) 00:07:39.728 18652.554 - 18753.378: 98.0434% ( 20) 00:07:39.728 18753.378 - 18854.203: 98.1630% ( 17) 00:07:39.728 18854.203 - 18955.028: 98.3390% ( 25) 00:07:39.728 18955.028 - 19055.852: 98.5079% ( 24) 00:07:39.728 19055.852 - 19156.677: 98.6627% ( 22) 00:07:39.728 19156.677 - 19257.502: 98.7965% ( 19) 00:07:39.728 19257.502 - 19358.326: 98.8809% ( 12) 00:07:39.728 19358.326 - 19459.151: 98.9654% ( 12) 00:07:39.728 19459.151 - 19559.975: 99.0498% ( 12) 00:07:39.728 19559.975 - 19660.800: 99.0921% ( 6) 00:07:39.728 19660.800 - 19761.625: 99.0991% ( 1) 00:07:39.728 26819.348 - 27020.997: 99.1343% ( 5) 00:07:39.728 27020.997 - 27222.646: 99.1976% ( 9) 00:07:39.728 27222.646 - 27424.295: 99.2539% ( 8) 00:07:39.728 27424.295 - 27625.945: 99.3032% ( 7) 00:07:39.728 27625.945 - 27827.594: 99.3595% ( 8) 00:07:39.728 27827.594 - 28029.243: 99.4158% ( 8) 00:07:39.728 28029.243 - 28230.892: 
99.4721% ( 8) 00:07:39.728 28230.892 - 28432.542: 99.5284% ( 8) 00:07:39.728 28432.542 - 28634.191: 99.5495% ( 3) 00:07:39.728 35288.615 - 35490.265: 99.5847% ( 5) 00:07:39.728 35490.265 - 35691.914: 99.6410% ( 8) 00:07:39.728 35691.914 - 35893.563: 99.6974% ( 8) 00:07:39.728 35893.563 - 36095.212: 99.7537% ( 8) 00:07:39.728 36095.212 - 36296.862: 99.8170% ( 9) 00:07:39.728 36296.862 - 36498.511: 99.8733% ( 8) 00:07:39.728 36498.511 - 36700.160: 99.9296% ( 8) 00:07:39.728 36700.160 - 36901.809: 99.9859% ( 8) 00:07:39.728 36901.809 - 37103.458: 100.0000% ( 2) 00:07:39.728 00:07:39.728 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.728 ============================================================================== 00:07:39.728 Range in us Cumulative IO count 00:07:39.728 5646.178 - 5671.385: 0.0070% ( 1) 00:07:39.728 5671.385 - 5696.591: 0.0771% ( 10) 00:07:39.728 5696.591 - 5721.797: 0.1612% ( 12) 00:07:39.728 5721.797 - 5747.003: 0.6166% ( 65) 00:07:39.728 5747.003 - 5772.209: 1.0650% ( 64) 00:07:39.728 5772.209 - 5797.415: 1.6606% ( 85) 00:07:39.728 5797.415 - 5822.622: 2.3613% ( 100) 00:07:39.728 5822.622 - 5847.828: 3.2581% ( 128) 00:07:39.728 5847.828 - 5873.034: 4.3442% ( 155) 00:07:39.728 5873.034 - 5898.240: 5.7876% ( 206) 00:07:39.728 5898.240 - 5923.446: 6.9577% ( 167) 00:07:39.728 5923.446 - 5948.652: 8.2259% ( 181) 00:07:39.728 5948.652 - 5973.858: 9.7604% ( 219) 00:07:39.728 5973.858 - 5999.065: 11.2318% ( 210) 00:07:39.728 5999.065 - 6024.271: 12.6401% ( 201) 00:07:39.728 6024.271 - 6049.477: 14.1256% ( 212) 00:07:39.728 6049.477 - 6074.683: 15.4989% ( 196) 00:07:39.728 6074.683 - 6099.889: 16.9843% ( 212) 00:07:39.728 6099.889 - 6125.095: 18.4767% ( 213) 00:07:39.728 6125.095 - 6150.302: 20.0252% ( 221) 00:07:39.728 6150.302 - 6175.508: 21.6158% ( 227) 00:07:39.728 6175.508 - 6200.714: 23.2063% ( 227) 00:07:39.728 6200.714 - 6225.920: 24.8809% ( 239) 00:07:39.728 6225.920 - 6251.126: 26.5135% ( 233) 00:07:39.728 6251.126 - 6276.332: 28.1811% ( 238) 00:07:39.728 6276.332 - 6301.538: 29.7996% ( 231) 00:07:39.728 6301.538 - 6326.745: 31.4742% ( 239) 00:07:39.728 6326.745 - 6351.951: 33.2049% ( 247) 00:07:39.728 6351.951 - 6377.157: 35.1107% ( 272) 00:07:39.728 6377.157 - 6402.363: 36.8694% ( 251) 00:07:39.728 6402.363 - 6427.569: 38.5930% ( 246) 00:07:39.728 6427.569 - 6452.775: 40.4288% ( 262) 00:07:39.728 6452.775 - 6503.188: 43.8551% ( 489) 00:07:39.728 6503.188 - 6553.600: 47.1202% ( 466) 00:07:39.729 6553.600 - 6604.012: 49.9580% ( 405) 00:07:39.729 6604.012 - 6654.425: 52.3122% ( 336) 00:07:39.729 6654.425 - 6704.837: 54.0429% ( 247) 00:07:39.729 6704.837 - 6755.249: 55.4232% ( 197) 00:07:39.729 6755.249 - 6805.662: 56.4952% ( 153) 00:07:39.729 6805.662 - 6856.074: 57.4201% ( 132) 00:07:39.729 6856.074 - 6906.486: 58.2119% ( 113) 00:07:39.729 6906.486 - 6956.898: 58.9406% ( 104) 00:07:39.729 6956.898 - 7007.311: 59.5852% ( 92) 00:07:39.729 7007.311 - 7057.723: 60.1878% ( 86) 00:07:39.729 7057.723 - 7108.135: 60.7343% ( 78) 00:07:39.729 7108.135 - 7158.548: 61.2878% ( 79) 00:07:39.729 7158.548 - 7208.960: 61.6732% ( 55) 00:07:39.729 7208.960 - 7259.372: 61.9605% ( 41) 00:07:39.729 7259.372 - 7309.785: 62.2057% ( 35) 00:07:39.729 7309.785 - 7360.197: 62.4299% ( 32) 00:07:39.729 7360.197 - 7410.609: 62.6541% ( 32) 00:07:39.729 7410.609 - 7461.022: 62.8784% ( 32) 00:07:39.729 7461.022 - 7511.434: 63.0746% ( 28) 00:07:39.729 7511.434 - 7561.846: 63.2427% ( 24) 00:07:39.729 7561.846 - 7612.258: 63.3828% ( 20) 00:07:39.729 7612.258 - 7662.671: 63.5160% ( 
19) 00:07:39.729 7662.671 - 7713.083: 63.6281% ( 16) 00:07:39.729 7713.083 - 7763.495: 63.7262% ( 14) 00:07:39.729 7763.495 - 7813.908: 63.8243% ( 14) 00:07:39.729 7813.908 - 7864.320: 63.9504% ( 18) 00:07:39.729 7864.320 - 7914.732: 64.0415% ( 13) 00:07:39.729 7914.732 - 7965.145: 64.1115% ( 10) 00:07:39.729 7965.145 - 8015.557: 64.1886% ( 11) 00:07:39.729 8015.557 - 8065.969: 64.2657% ( 11) 00:07:39.729 8065.969 - 8116.382: 64.3988% ( 19) 00:07:39.729 8116.382 - 8166.794: 64.4899% ( 13) 00:07:39.729 8166.794 - 8217.206: 64.5810% ( 13) 00:07:39.729 8217.206 - 8267.618: 64.6721% ( 13) 00:07:39.729 8267.618 - 8318.031: 64.7912% ( 17) 00:07:39.729 8318.031 - 8368.443: 64.9524% ( 23) 00:07:39.729 8368.443 - 8418.855: 65.1766% ( 32) 00:07:39.729 8418.855 - 8469.268: 65.3868% ( 30) 00:07:39.729 8469.268 - 8519.680: 65.6110% ( 32) 00:07:39.729 8519.680 - 8570.092: 65.8562% ( 35) 00:07:39.729 8570.092 - 8620.505: 66.1015% ( 35) 00:07:39.729 8620.505 - 8670.917: 66.3187% ( 31) 00:07:39.729 8670.917 - 8721.329: 66.5359% ( 31) 00:07:39.729 8721.329 - 8771.742: 66.7391% ( 29) 00:07:39.729 8771.742 - 8822.154: 66.9703% ( 33) 00:07:39.729 8822.154 - 8872.566: 67.2015% ( 33) 00:07:39.729 8872.566 - 8922.978: 67.3837% ( 26) 00:07:39.729 8922.978 - 8973.391: 67.5729% ( 27) 00:07:39.729 8973.391 - 9023.803: 67.7831% ( 30) 00:07:39.729 9023.803 - 9074.215: 68.0283% ( 35) 00:07:39.729 9074.215 - 9124.628: 68.2035% ( 25) 00:07:39.729 9124.628 - 9175.040: 68.3786% ( 25) 00:07:39.729 9175.040 - 9225.452: 68.5468% ( 24) 00:07:39.729 9225.452 - 9275.865: 68.8271% ( 40) 00:07:39.729 9275.865 - 9326.277: 68.9882% ( 23) 00:07:39.729 9326.277 - 9376.689: 69.1003% ( 16) 00:07:39.729 9376.689 - 9427.102: 69.2475% ( 21) 00:07:39.729 9427.102 - 9477.514: 69.4226% ( 25) 00:07:39.729 9477.514 - 9527.926: 69.6188% ( 28) 00:07:39.729 9527.926 - 9578.338: 69.7870% ( 24) 00:07:39.729 9578.338 - 9628.751: 69.9271% ( 20) 00:07:39.729 9628.751 - 9679.163: 70.1163% ( 27) 00:07:39.729 9679.163 - 9729.575: 70.2915% ( 25) 00:07:39.729 9729.575 - 9779.988: 70.4737% ( 26) 00:07:39.729 9779.988 - 9830.400: 70.6979% ( 32) 00:07:39.729 9830.400 - 9880.812: 70.9571% ( 37) 00:07:39.729 9880.812 - 9931.225: 71.1813% ( 32) 00:07:39.729 9931.225 - 9981.637: 71.3915% ( 30) 00:07:39.729 9981.637 - 10032.049: 71.6017% ( 30) 00:07:39.729 10032.049 - 10082.462: 71.7629% ( 23) 00:07:39.729 10082.462 - 10132.874: 71.9591% ( 28) 00:07:39.729 10132.874 - 10183.286: 72.1132% ( 22) 00:07:39.729 10183.286 - 10233.698: 72.2884% ( 25) 00:07:39.729 10233.698 - 10284.111: 72.4566% ( 24) 00:07:39.729 10284.111 - 10334.523: 72.6037% ( 21) 00:07:39.729 10334.523 - 10384.935: 72.7578% ( 22) 00:07:39.729 10384.935 - 10435.348: 72.8910% ( 19) 00:07:39.729 10435.348 - 10485.760: 73.0521% ( 23) 00:07:39.729 10485.760 - 10536.172: 73.1993% ( 21) 00:07:39.729 10536.172 - 10586.585: 73.3394% ( 20) 00:07:39.729 10586.585 - 10636.997: 73.4936% ( 22) 00:07:39.729 10636.997 - 10687.409: 73.6197% ( 18) 00:07:39.729 10687.409 - 10737.822: 73.7178% ( 14) 00:07:39.729 10737.822 - 10788.234: 73.8089% ( 13) 00:07:39.729 10788.234 - 10838.646: 73.9070% ( 14) 00:07:39.729 10838.646 - 10889.058: 74.0050% ( 14) 00:07:39.729 10889.058 - 10939.471: 74.1031% ( 14) 00:07:39.729 10939.471 - 10989.883: 74.1942% ( 13) 00:07:39.729 10989.883 - 11040.295: 74.2853% ( 13) 00:07:39.729 11040.295 - 11090.708: 74.3624% ( 11) 00:07:39.729 11090.708 - 11141.120: 74.4325% ( 10) 00:07:39.729 11141.120 - 11191.532: 74.5025% ( 10) 00:07:39.729 11191.532 - 11241.945: 74.5726% ( 10) 00:07:39.729 11241.945 
- 11292.357: 74.6567% ( 12) 00:07:39.729 11292.357 - 11342.769: 74.7758% ( 17) 00:07:39.729 11342.769 - 11393.182: 74.8739% ( 14) 00:07:39.729 11393.182 - 11443.594: 75.0000% ( 18) 00:07:39.729 11443.594 - 11494.006: 75.1051% ( 15) 00:07:39.729 11494.006 - 11544.418: 75.2032% ( 14) 00:07:39.729 11544.418 - 11594.831: 75.3083% ( 15) 00:07:39.729 11594.831 - 11645.243: 75.3924% ( 12) 00:07:39.729 11645.243 - 11695.655: 75.4484% ( 8) 00:07:39.729 11695.655 - 11746.068: 75.5535% ( 15) 00:07:39.729 11746.068 - 11796.480: 75.6586% ( 15) 00:07:39.729 11796.480 - 11846.892: 75.7848% ( 18) 00:07:39.729 11846.892 - 11897.305: 75.9039% ( 17) 00:07:39.729 11897.305 - 11947.717: 76.0230% ( 17) 00:07:39.729 11947.717 - 11998.129: 76.1771% ( 22) 00:07:39.729 11998.129 - 12048.542: 76.2962% ( 17) 00:07:39.729 12048.542 - 12098.954: 76.4154% ( 17) 00:07:39.729 12098.954 - 12149.366: 76.5345% ( 17) 00:07:39.729 12149.366 - 12199.778: 76.6326% ( 14) 00:07:39.729 12199.778 - 12250.191: 76.8498% ( 31) 00:07:39.729 12250.191 - 12300.603: 77.0530% ( 29) 00:07:39.729 12300.603 - 12351.015: 77.2211% ( 24) 00:07:39.729 12351.015 - 12401.428: 77.3473% ( 18) 00:07:39.729 12401.428 - 12451.840: 77.5504% ( 29) 00:07:39.729 12451.840 - 12502.252: 77.7116% ( 23) 00:07:39.729 12502.252 - 12552.665: 77.8868% ( 25) 00:07:39.729 12552.665 - 12603.077: 78.0689% ( 26) 00:07:39.729 12603.077 - 12653.489: 78.2231% ( 22) 00:07:39.729 12653.489 - 12703.902: 78.4263% ( 29) 00:07:39.729 12703.902 - 12754.314: 78.6155% ( 27) 00:07:39.729 12754.314 - 12804.726: 78.7906% ( 25) 00:07:39.729 12804.726 - 12855.138: 78.9798% ( 27) 00:07:39.729 12855.138 - 12905.551: 79.1970% ( 31) 00:07:39.729 12905.551 - 13006.375: 79.6665% ( 67) 00:07:39.729 13006.375 - 13107.200: 80.1079% ( 63) 00:07:39.729 13107.200 - 13208.025: 80.4793% ( 53) 00:07:39.729 13208.025 - 13308.849: 80.7805% ( 43) 00:07:39.729 13308.849 - 13409.674: 81.0888% ( 44) 00:07:39.729 13409.674 - 13510.498: 81.4112% ( 46) 00:07:39.729 13510.498 - 13611.323: 81.8736% ( 66) 00:07:39.729 13611.323 - 13712.148: 82.3781% ( 72) 00:07:39.729 13712.148 - 13812.972: 82.8545% ( 68) 00:07:39.729 13812.972 - 13913.797: 83.3240% ( 67) 00:07:39.729 13913.797 - 14014.622: 83.7304% ( 58) 00:07:39.729 14014.622 - 14115.446: 84.2138% ( 69) 00:07:39.729 14115.446 - 14216.271: 84.7534% ( 77) 00:07:39.729 14216.271 - 14317.095: 85.2929% ( 77) 00:07:39.729 14317.095 - 14417.920: 85.8114% ( 74) 00:07:39.729 14417.920 - 14518.745: 86.2598% ( 64) 00:07:39.729 14518.745 - 14619.569: 86.5751% ( 45) 00:07:39.729 14619.569 - 14720.394: 86.9184% ( 49) 00:07:39.729 14720.394 - 14821.218: 87.2197% ( 43) 00:07:39.729 14821.218 - 14922.043: 87.5701% ( 50) 00:07:39.729 14922.043 - 15022.868: 87.9204% ( 50) 00:07:39.729 15022.868 - 15123.692: 88.3058% ( 55) 00:07:39.729 15123.692 - 15224.517: 88.8383% ( 76) 00:07:39.729 15224.517 - 15325.342: 89.5179% ( 97) 00:07:39.729 15325.342 - 15426.166: 90.0224% ( 72) 00:07:39.729 15426.166 - 15526.991: 90.5059% ( 69) 00:07:39.729 15526.991 - 15627.815: 91.0104% ( 72) 00:07:39.729 15627.815 - 15728.640: 91.5008% ( 70) 00:07:39.729 15728.640 - 15829.465: 91.9283% ( 61) 00:07:39.729 15829.465 - 15930.289: 92.3346% ( 58) 00:07:39.729 15930.289 - 16031.114: 92.6640% ( 47) 00:07:39.729 16031.114 - 16131.938: 92.9232% ( 37) 00:07:39.729 16131.938 - 16232.763: 93.2315% ( 44) 00:07:39.729 16232.763 - 16333.588: 93.4908% ( 37) 00:07:39.729 16333.588 - 16434.412: 93.7850% ( 42) 00:07:39.729 16434.412 - 16535.237: 94.0723% ( 41) 00:07:39.729 16535.237 - 16636.062: 94.3386% ( 38) 
00:07:39.729 16636.062 - 16736.886: 94.6258% ( 41) 00:07:39.729 16736.886 - 16837.711: 94.8711% ( 35) 00:07:39.729 16837.711 - 16938.535: 95.1093% ( 34) 00:07:39.729 16938.535 - 17039.360: 95.2845% ( 25) 00:07:39.729 17039.360 - 17140.185: 95.4737% ( 27) 00:07:39.729 17140.185 - 17241.009: 95.7189% ( 35) 00:07:39.729 17241.009 - 17341.834: 95.9081% ( 27) 00:07:39.729 17341.834 - 17442.658: 96.0552% ( 21) 00:07:39.729 17442.658 - 17543.483: 96.2514% ( 28) 00:07:39.729 17543.483 - 17644.308: 96.4126% ( 23) 00:07:39.729 17644.308 - 17745.132: 96.5247% ( 16) 00:07:39.729 17745.132 - 17845.957: 96.7349% ( 30) 00:07:39.729 17845.957 - 17946.782: 96.9941% ( 37) 00:07:39.729 17946.782 - 18047.606: 97.1763% ( 26) 00:07:39.729 18047.606 - 18148.431: 97.3655% ( 27) 00:07:39.729 18148.431 - 18249.255: 97.5126% ( 21) 00:07:39.729 18249.255 - 18350.080: 97.6387% ( 18) 00:07:39.729 18350.080 - 18450.905: 97.7649% ( 18) 00:07:39.729 18450.905 - 18551.729: 97.9470% ( 26) 00:07:39.729 18551.729 - 18652.554: 98.1993% ( 36) 00:07:39.729 18652.554 - 18753.378: 98.4025% ( 29) 00:07:39.729 18753.378 - 18854.203: 98.5916% ( 27) 00:07:39.729 18854.203 - 18955.028: 98.6897% ( 14) 00:07:39.729 18955.028 - 19055.852: 98.7878% ( 14) 00:07:39.730 19055.852 - 19156.677: 98.8649% ( 11) 00:07:39.730 19156.677 - 19257.502: 98.9350% ( 10) 00:07:39.730 19257.502 - 19358.326: 99.0191% ( 12) 00:07:39.730 19358.326 - 19459.151: 99.0961% ( 11) 00:07:39.730 19459.151 - 19559.975: 99.1031% ( 1) 00:07:39.730 19761.625 - 19862.449: 99.1172% ( 2) 00:07:39.730 19862.449 - 19963.274: 99.1382% ( 3) 00:07:39.730 19963.274 - 20064.098: 99.1662% ( 4) 00:07:39.730 20064.098 - 20164.923: 99.1942% ( 4) 00:07:39.730 20164.923 - 20265.748: 99.2223% ( 4) 00:07:39.730 20265.748 - 20366.572: 99.2573% ( 5) 00:07:39.730 20366.572 - 20467.397: 99.2853% ( 4) 00:07:39.730 20467.397 - 20568.222: 99.3133% ( 4) 00:07:39.730 20568.222 - 20669.046: 99.3414% ( 4) 00:07:39.730 20669.046 - 20769.871: 99.3764% ( 5) 00:07:39.730 20769.871 - 20870.695: 99.4044% ( 4) 00:07:39.730 20870.695 - 20971.520: 99.4325% ( 4) 00:07:39.730 20971.520 - 21072.345: 99.4605% ( 4) 00:07:39.730 21072.345 - 21173.169: 99.4885% ( 4) 00:07:39.730 21173.169 - 21273.994: 99.5165% ( 4) 00:07:39.730 21273.994 - 21374.818: 99.5516% ( 5) 00:07:39.730 26416.049 - 26617.698: 99.5726% ( 3) 00:07:39.730 26617.698 - 26819.348: 99.6286% ( 8) 00:07:39.730 26819.348 - 27020.997: 99.6847% ( 8) 00:07:39.730 27020.997 - 27222.646: 99.7408% ( 8) 00:07:39.730 27222.646 - 27424.295: 99.7968% ( 8) 00:07:39.730 27424.295 - 27625.945: 99.8599% ( 9) 00:07:39.730 27625.945 - 27827.594: 99.9159% ( 8) 00:07:39.730 27827.594 - 28029.243: 99.9720% ( 8) 00:07:39.730 28029.243 - 28230.892: 100.0000% ( 4) 00:07:39.730 00:07:39.730 11:50:37 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:40.667 Initializing NVMe Controllers 00:07:40.667 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.667 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.667 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.667 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.667 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:40.667 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:40.667 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:40.667 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:40.667 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:40.667 
Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:40.667 Initialization complete. Launching workers.
00:07:40.667 ========================================================
00:07:40.667 Latency(us)
00:07:40.667 Device Information : IOPS MiB/s Average min max
00:07:40.667 PCIE (0000:00:11.0) NSID 1 from core 0: 14260.56 167.12 8988.53 5748.23 28582.13
00:07:40.667 PCIE (0000:00:13.0) NSID 1 from core 0: 14260.56 167.12 8975.25 5699.89 28357.07
00:07:40.667 PCIE (0000:00:10.0) NSID 1 from core 0: 14260.56 167.12 8960.26 5655.33 28091.22
00:07:40.667 PCIE (0000:00:12.0) NSID 1 from core 0: 14260.56 167.12 8946.14 5787.26 26654.91
00:07:40.667 PCIE (0000:00:12.0) NSID 2 from core 0: 14260.56 167.12 8932.45 5741.75 24994.15
00:07:40.667 PCIE (0000:00:12.0) NSID 3 from core 0: 14324.51 167.87 8878.87 5785.00 19510.11
00:07:40.667 ========================================================
00:07:40.667 Total : 85627.33 1003.45 8946.87 5655.33 28582.13
00:07:40.667
00:07:40.667 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:40.667 =================================================================================
00:07:40.667 1.00000% : 6125.095us
00:07:40.667 10.00000% : 6377.157us
00:07:40.667 25.00000% : 6553.600us
00:07:40.667 50.00000% : 6856.074us
00:07:40.667 75.00000% : 8267.618us
00:07:40.667 90.00000% : 18148.431us
00:07:40.667 95.00000% : 18249.255us
00:07:40.667 98.00000% : 18854.203us
00:07:40.667 99.00000% : 19358.326us
00:07:40.667 99.50000% : 25206.154us
00:07:40.667 99.90000% : 28230.892us
00:07:40.667 99.99000% : 28634.191us
00:07:40.667 99.99900% : 28634.191us
00:07:40.667 99.99990% : 28634.191us
00:07:40.667 99.99999% : 28634.191us
00:07:40.667
00:07:40.667 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:40.667 =================================================================================
00:07:40.667 1.00000% : 6099.889us
00:07:40.667 10.00000% : 6377.157us
00:07:40.667 25.00000% : 6553.600us
00:07:40.667 50.00000% : 6856.074us
00:07:40.667 75.00000% : 8318.031us
00:07:40.667 90.00000% : 18047.606us
00:07:40.667 95.00000% : 18249.255us
00:07:40.667 98.00000% : 18955.028us
00:07:40.667 99.00000% : 19459.151us
00:07:40.667 99.50000% : 24097.083us
00:07:40.667 99.90000% : 26617.698us
00:07:40.667 99.99000% : 26617.698us
00:07:40.667 99.99900% : 28432.542us
00:07:40.667 99.99990% : 28432.542us
00:07:40.667 99.99999% : 28432.542us
00:07:40.667
00:07:40.667 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:40.667 =================================================================================
00:07:40.667 1.00000% : 5999.065us
00:07:40.667 10.00000% : 6326.745us
00:07:40.667 25.00000% : 6553.600us
00:07:40.667 50.00000% : 6856.074us
00:07:40.667 75.00000% : 8318.031us
00:07:40.667 90.00000% : 17745.132us
00:07:40.667 95.00000% : 18551.729us
00:07:40.667 98.00000% : 19459.151us
00:07:40.667 99.00000% : 19862.449us
00:07:40.667 99.50000% : 21475.643us
00:07:40.667 99.90000% : 27625.945us
00:07:40.667 99.99000% : 28029.243us
00:07:40.667 99.99900% : 28230.892us
00:07:40.667 99.99990% : 28230.892us
00:07:40.667 99.99999% : 28230.892us
00:07:40.667
00:07:40.667 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:40.667 =================================================================================
00:07:40.667 1.00000% : 6099.889us
00:07:40.667 10.00000% : 6377.157us
00:07:40.667 25.00000% : 6604.012us
00:07:40.667 50.00000% : 6856.074us
00:07:40.667 75.00000% : 8418.855us
00:07:40.667 90.00000%
: 18148.431us 00:07:40.667 95.00000% : 18249.255us 00:07:40.667 98.00000% : 18753.378us 00:07:40.667 99.00000% : 18955.028us 00:07:40.667 99.50000% : 19559.975us 00:07:40.667 99.90000% : 26416.049us 00:07:40.667 99.99000% : 26819.348us 00:07:40.667 99.99900% : 26819.348us 00:07:40.667 99.99990% : 26819.348us 00:07:40.667 99.99999% : 26819.348us 00:07:40.667 00:07:40.667 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.667 ================================================================================= 00:07:40.667 1.00000% : 6125.095us 00:07:40.667 10.00000% : 6377.157us 00:07:40.667 25.00000% : 6553.600us 00:07:40.667 50.00000% : 6856.074us 00:07:40.667 75.00000% : 8368.443us 00:07:40.667 90.00000% : 18047.606us 00:07:40.667 95.00000% : 18249.255us 00:07:40.667 98.00000% : 18652.554us 00:07:40.667 99.00000% : 18955.028us 00:07:40.667 99.50000% : 19459.151us 00:07:40.667 99.90000% : 24702.031us 00:07:40.667 99.99000% : 25004.505us 00:07:40.667 99.99900% : 25004.505us 00:07:40.667 99.99990% : 25004.505us 00:07:40.667 99.99999% : 25004.505us 00:07:40.667 00:07:40.667 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.667 ================================================================================= 00:07:40.667 1.00000% : 6125.095us 00:07:40.667 10.00000% : 6377.157us 00:07:40.667 25.00000% : 6553.600us 00:07:40.667 50.00000% : 6856.074us 00:07:40.667 75.00000% : 8418.855us 00:07:40.667 90.00000% : 18047.606us 00:07:40.667 95.00000% : 18148.431us 00:07:40.667 98.00000% : 18652.554us 00:07:40.667 99.00000% : 18753.378us 00:07:40.667 99.50000% : 18955.028us 00:07:40.667 99.90000% : 19156.677us 00:07:40.667 99.99000% : 19559.975us 00:07:40.667 99.99900% : 19559.975us 00:07:40.667 99.99990% : 19559.975us 00:07:40.667 99.99999% : 19559.975us 00:07:40.667 00:07:40.667 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.667 ============================================================================== 00:07:40.667 Range in us Cumulative IO count 00:07:40.667 5747.003 - 5772.209: 0.0140% ( 2) 00:07:40.667 5797.415 - 5822.622: 0.0210% ( 1) 00:07:40.667 5873.034 - 5898.240: 0.0280% ( 1) 00:07:40.667 5898.240 - 5923.446: 0.0420% ( 2) 00:07:40.667 5923.446 - 5948.652: 0.0701% ( 4) 00:07:40.667 5948.652 - 5973.858: 0.1401% ( 10) 00:07:40.667 5973.858 - 5999.065: 0.2102% ( 10) 00:07:40.667 5999.065 - 6024.271: 0.3153% ( 15) 00:07:40.667 6024.271 - 6049.477: 0.4204% ( 15) 00:07:40.667 6049.477 - 6074.683: 0.5956% ( 25) 00:07:40.667 6074.683 - 6099.889: 0.7777% ( 26) 00:07:40.667 6099.889 - 6125.095: 1.0930% ( 45) 00:07:40.667 6125.095 - 6150.302: 1.4574% ( 52) 00:07:40.667 6150.302 - 6175.508: 2.1791% ( 103) 00:07:40.667 6175.508 - 6200.714: 2.7957% ( 88) 00:07:40.667 6200.714 - 6225.920: 3.7066% ( 130) 00:07:40.667 6225.920 - 6251.126: 4.4983% ( 113) 00:07:40.667 6251.126 - 6276.332: 5.5844% ( 155) 00:07:40.667 6276.332 - 6301.538: 6.6073% ( 146) 00:07:40.667 6301.538 - 6326.745: 8.1138% ( 215) 00:07:40.667 6326.745 - 6351.951: 9.6132% ( 214) 00:07:40.667 6351.951 - 6377.157: 11.1757% ( 223) 00:07:40.667 6377.157 - 6402.363: 12.8714% ( 242) 00:07:40.667 6402.363 - 6427.569: 14.5039% ( 233) 00:07:40.667 6427.569 - 6452.775: 16.3117% ( 258) 00:07:40.667 6452.775 - 6503.188: 20.4246% ( 587) 00:07:40.667 6503.188 - 6553.600: 25.1471% ( 674) 00:07:40.667 6553.600 - 6604.012: 30.5213% ( 767) 00:07:40.667 6604.012 - 6654.425: 34.4941% ( 567) 00:07:40.667 6654.425 - 6704.837: 40.0224% ( 789) 00:07:40.667 6704.837 - 6755.249: 44.5277% ( 643) 
00:07:40.667 6755.249 - 6805.662: 49.3133% ( 683) 00:07:40.667 6805.662 - 6856.074: 53.0269% ( 530) 00:07:40.667 6856.074 - 6906.486: 56.2710% ( 463) 00:07:40.667 6906.486 - 6956.898: 58.6743% ( 343) 00:07:40.667 6956.898 - 7007.311: 60.7623% ( 298) 00:07:40.667 7007.311 - 7057.723: 62.2898% ( 218) 00:07:40.667 7057.723 - 7108.135: 63.3478% ( 151) 00:07:40.667 7108.135 - 7158.548: 63.9714% ( 89) 00:07:40.667 7158.548 - 7208.960: 64.6441% ( 96) 00:07:40.667 7208.960 - 7259.372: 65.3167% ( 96) 00:07:40.667 7259.372 - 7309.785: 66.5008% ( 169) 00:07:40.667 7309.785 - 7360.197: 67.2365% ( 105) 00:07:40.667 7360.197 - 7410.609: 67.8672% ( 90) 00:07:40.667 7410.609 - 7461.022: 68.4207% ( 79) 00:07:40.667 7461.022 - 7511.434: 69.1003% ( 97) 00:07:40.667 7511.434 - 7561.846: 69.6188% ( 74) 00:07:40.667 7561.846 - 7612.258: 70.2144% ( 85) 00:07:40.667 7612.258 - 7662.671: 70.7189% ( 72) 00:07:40.667 7662.671 - 7713.083: 71.0973% ( 54) 00:07:40.667 7713.083 - 7763.495: 71.3635% ( 38) 00:07:40.667 7763.495 - 7813.908: 71.8470% ( 69) 00:07:40.667 7813.908 - 7864.320: 72.4706% ( 89) 00:07:40.667 7864.320 - 7914.732: 72.9050% ( 62) 00:07:40.667 7914.732 - 7965.145: 73.1222% ( 31) 00:07:40.667 7965.145 - 8015.557: 73.3324% ( 30) 00:07:40.667 8015.557 - 8065.969: 73.6197% ( 41) 00:07:40.667 8065.969 - 8116.382: 74.1522% ( 76) 00:07:40.667 8116.382 - 8166.794: 74.5586% ( 58) 00:07:40.667 8166.794 - 8217.206: 74.8108% ( 36) 00:07:40.667 8217.206 - 8267.618: 75.3924% ( 83) 00:07:40.668 8267.618 - 8318.031: 75.5956% ( 29) 00:07:40.668 8318.031 - 8368.443: 75.7848% ( 27) 00:07:40.668 8368.443 - 8418.855: 75.9529% ( 24) 00:07:40.668 8418.855 - 8469.268: 76.0650% ( 16) 00:07:40.668 8469.268 - 8519.680: 76.2262% ( 23) 00:07:40.668 8519.680 - 8570.092: 76.4294% ( 29) 00:07:40.668 8570.092 - 8620.505: 76.7307% ( 43) 00:07:40.668 8620.505 - 8670.917: 76.9759% ( 35) 00:07:40.668 8670.917 - 8721.329: 77.1090% ( 19) 00:07:40.668 8721.329 - 8771.742: 77.2001% ( 13) 00:07:40.668 8771.742 - 8822.154: 77.4313% ( 33) 00:07:40.668 8822.154 - 8872.566: 77.4874% ( 8) 00:07:40.668 8872.566 - 8922.978: 77.5154% ( 4) 00:07:40.668 8922.978 - 8973.391: 77.5785% ( 9) 00:07:40.668 8973.391 - 9023.803: 77.6275% ( 7) 00:07:40.668 9023.803 - 9074.215: 77.6696% ( 6) 00:07:40.668 9074.215 - 9124.628: 77.6976% ( 4) 00:07:40.668 9124.628 - 9175.040: 77.7326% ( 5) 00:07:40.668 9175.040 - 9225.452: 77.7677% ( 5) 00:07:40.668 9225.452 - 9275.865: 77.9288% ( 23) 00:07:40.668 9275.865 - 9326.277: 78.0409% ( 16) 00:07:40.668 9326.277 - 9376.689: 78.1881% ( 21) 00:07:40.668 9376.689 - 9427.102: 78.2371% ( 7) 00:07:40.668 9427.102 - 9477.514: 78.2721% ( 5) 00:07:40.668 9477.514 - 9527.926: 78.2932% ( 3) 00:07:40.668 9527.926 - 9578.338: 78.3352% ( 6) 00:07:40.668 9578.338 - 9628.751: 78.3702% ( 5) 00:07:40.668 9628.751 - 9679.163: 78.4403% ( 10) 00:07:40.668 9679.163 - 9729.575: 78.5104% ( 10) 00:07:40.668 9729.575 - 9779.988: 78.5734% ( 9) 00:07:40.668 9779.988 - 9830.400: 78.6715% ( 14) 00:07:40.668 9830.400 - 9880.812: 78.7976% ( 18) 00:07:40.668 9880.812 - 9931.225: 79.0149% ( 31) 00:07:40.668 9931.225 - 9981.637: 79.0779% ( 9) 00:07:40.668 9981.637 - 10032.049: 79.1340% ( 8) 00:07:40.668 10032.049 - 10082.462: 79.2391% ( 15) 00:07:40.668 10082.462 - 10132.874: 79.3302% ( 13) 00:07:40.668 10132.874 - 10183.286: 79.4843% ( 22) 00:07:40.668 10183.286 - 10233.698: 79.6595% ( 25) 00:07:40.668 10233.698 - 10284.111: 79.8697% ( 30) 00:07:40.668 10284.111 - 10334.523: 80.0028% ( 19) 00:07:40.668 10334.523 - 10384.935: 80.1289% ( 18) 00:07:40.668 
10384.935 - 10435.348: 80.1920% ( 9) 00:07:40.668 10435.348 - 10485.760: 80.2340% ( 6) 00:07:40.668 10485.760 - 10536.172: 80.2971% ( 9) 00:07:40.668 10536.172 - 10586.585: 80.3812% ( 12) 00:07:40.668 10586.585 - 10636.997: 80.4442% ( 9) 00:07:40.668 10636.997 - 10687.409: 80.5353% ( 13) 00:07:40.668 10687.409 - 10737.822: 80.6054% ( 10) 00:07:40.668 10737.822 - 10788.234: 80.6895% ( 12) 00:07:40.668 10788.234 - 10838.646: 80.7455% ( 8) 00:07:40.668 10838.646 - 10889.058: 80.7735% ( 4) 00:07:40.668 10889.058 - 10939.471: 80.8156% ( 6) 00:07:40.668 10939.471 - 10989.883: 80.8786% ( 9) 00:07:40.668 10989.883 - 11040.295: 80.9487% ( 10) 00:07:40.668 11040.295 - 11090.708: 81.0118% ( 9) 00:07:40.668 11090.708 - 11141.120: 81.2920% ( 40) 00:07:40.668 11141.120 - 11191.532: 81.3761% ( 12) 00:07:40.668 11191.532 - 11241.945: 81.4742% ( 14) 00:07:40.668 11241.945 - 11292.357: 81.5583% ( 12) 00:07:40.668 11292.357 - 11342.769: 81.6564% ( 14) 00:07:40.668 11342.769 - 11393.182: 81.7195% ( 9) 00:07:40.668 11393.182 - 11443.594: 81.8105% ( 13) 00:07:40.668 11443.594 - 11494.006: 81.8666% ( 8) 00:07:40.668 11494.006 - 11544.418: 81.8876% ( 3) 00:07:40.668 11544.418 - 11594.831: 81.9086% ( 3) 00:07:40.668 11594.831 - 11645.243: 81.9297% ( 3) 00:07:40.668 11645.243 - 11695.655: 81.9507% ( 3) 00:07:40.668 11695.655 - 11746.068: 81.9717% ( 3) 00:07:40.668 11746.068 - 11796.480: 81.9857% ( 2) 00:07:40.668 11796.480 - 11846.892: 81.9927% ( 1) 00:07:40.668 11846.892 - 11897.305: 82.0067% ( 2) 00:07:40.668 11897.305 - 11947.717: 82.0137% ( 1) 00:07:40.668 11947.717 - 11998.129: 82.0207% ( 1) 00:07:40.668 11998.129 - 12048.542: 82.0348% ( 2) 00:07:40.668 12048.542 - 12098.954: 82.0418% ( 1) 00:07:40.668 12098.954 - 12149.366: 82.0488% ( 1) 00:07:40.668 12149.366 - 12199.778: 82.0558% ( 1) 00:07:40.668 12199.778 - 12250.191: 82.0628% ( 1) 00:07:40.668 12653.489 - 12703.902: 82.0838% ( 3) 00:07:40.668 12703.902 - 12754.314: 82.1258% ( 6) 00:07:40.668 12754.314 - 12804.726: 82.1539% ( 4) 00:07:40.668 12804.726 - 12855.138: 82.1959% ( 6) 00:07:40.668 12855.138 - 12905.551: 82.2379% ( 6) 00:07:40.668 12905.551 - 13006.375: 82.3711% ( 19) 00:07:40.668 13006.375 - 13107.200: 82.4201% ( 7) 00:07:40.668 13107.200 - 13208.025: 82.4552% ( 5) 00:07:40.668 13208.025 - 13308.849: 82.5112% ( 8) 00:07:40.668 13308.849 - 13409.674: 82.7494% ( 34) 00:07:40.668 13409.674 - 13510.498: 82.8966% ( 21) 00:07:40.668 13510.498 - 13611.323: 83.0087% ( 16) 00:07:40.668 13611.323 - 13712.148: 83.0647% ( 8) 00:07:40.668 13712.148 - 13812.972: 83.0998% ( 5) 00:07:40.668 13812.972 - 13913.797: 83.1278% ( 4) 00:07:40.668 13913.797 - 14014.622: 83.1558% ( 4) 00:07:40.668 14014.622 - 14115.446: 83.1909% ( 5) 00:07:40.668 14115.446 - 14216.271: 83.2609% ( 10) 00:07:40.668 14216.271 - 14317.095: 83.3310% ( 10) 00:07:40.668 14317.095 - 14417.920: 83.3871% ( 8) 00:07:40.668 14417.920 - 14518.745: 83.4081% ( 3) 00:07:40.668 14922.043 - 15022.868: 83.4151% ( 1) 00:07:40.668 15022.868 - 15123.692: 83.4221% ( 1) 00:07:40.668 15123.692 - 15224.517: 83.4501% ( 4) 00:07:40.668 15224.517 - 15325.342: 83.5622% ( 16) 00:07:40.668 15325.342 - 15426.166: 83.6813% ( 17) 00:07:40.668 15426.166 - 15526.991: 83.8215% ( 20) 00:07:40.668 15526.991 - 15627.815: 84.0317% ( 30) 00:07:40.668 15627.815 - 15728.640: 84.0737% ( 6) 00:07:40.668 15728.640 - 15829.465: 84.1087% ( 5) 00:07:40.668 15829.465 - 15930.289: 84.1508% ( 6) 00:07:40.668 15930.289 - 16031.114: 84.1928% ( 6) 00:07:40.668 16031.114 - 16131.938: 84.2559% ( 9) 00:07:40.668 16131.938 - 16232.763: 84.3049% ( 
7) 00:07:40.668 16232.763 - 16333.588: 84.3820% ( 11) 00:07:40.668 16333.588 - 16434.412: 84.4170% ( 5) 00:07:40.668 16434.412 - 16535.237: 84.4591% ( 6) 00:07:40.668 16535.237 - 16636.062: 84.4801% ( 3) 00:07:40.668 16636.062 - 16736.886: 84.5432% ( 9) 00:07:40.668 16736.886 - 16837.711: 84.6202% ( 11) 00:07:40.668 16837.711 - 16938.535: 84.7744% ( 22) 00:07:40.668 16938.535 - 17039.360: 84.8655% ( 13) 00:07:40.668 17039.360 - 17140.185: 84.9566% ( 13) 00:07:40.668 17140.185 - 17241.009: 85.1668% ( 30) 00:07:40.668 17241.009 - 17341.834: 85.3069% ( 20) 00:07:40.668 17341.834 - 17442.658: 85.4330% ( 18) 00:07:40.668 17442.658 - 17543.483: 85.8184% ( 55) 00:07:40.668 17543.483 - 17644.308: 86.4210% ( 86) 00:07:40.668 17644.308 - 17745.132: 87.1567% ( 105) 00:07:40.668 17745.132 - 17845.957: 87.8854% ( 104) 00:07:40.668 17845.957 - 17946.782: 88.5930% ( 101) 00:07:40.668 17946.782 - 18047.606: 89.7982% ( 172) 00:07:40.668 18047.606 - 18148.431: 94.1494% ( 621) 00:07:40.668 18148.431 - 18249.255: 95.3475% ( 171) 00:07:40.668 18249.255 - 18350.080: 95.8240% ( 68) 00:07:40.668 18350.080 - 18450.905: 96.2724% ( 64) 00:07:40.668 18450.905 - 18551.729: 96.7349% ( 66) 00:07:40.668 18551.729 - 18652.554: 97.4285% ( 99) 00:07:40.668 18652.554 - 18753.378: 97.9821% ( 79) 00:07:40.668 18753.378 - 18854.203: 98.4445% ( 66) 00:07:40.668 18854.203 - 18955.028: 98.7528% ( 44) 00:07:40.668 18955.028 - 19055.852: 98.8929% ( 20) 00:07:40.668 19055.852 - 19156.677: 98.9280% ( 5) 00:07:40.668 19156.677 - 19257.502: 98.9840% ( 8) 00:07:40.668 19257.502 - 19358.326: 99.0401% ( 8) 00:07:40.668 19358.326 - 19459.151: 99.0821% ( 6) 00:07:40.668 19459.151 - 19559.975: 99.0961% ( 2) 00:07:40.668 19963.274 - 20064.098: 99.1031% ( 1) 00:07:40.668 23693.785 - 23794.609: 99.1101% ( 1) 00:07:40.668 24601.206 - 24702.031: 99.1452% ( 5) 00:07:40.668 24702.031 - 24802.855: 99.2152% ( 10) 00:07:40.668 24802.855 - 24903.680: 99.2783% ( 9) 00:07:40.668 24903.680 - 25004.505: 99.3554% ( 11) 00:07:40.668 25004.505 - 25105.329: 99.4254% ( 10) 00:07:40.668 25105.329 - 25206.154: 99.5025% ( 11) 00:07:40.668 25206.154 - 25306.978: 99.5516% ( 7) 00:07:40.668 27424.295 - 27625.945: 99.5656% ( 2) 00:07:40.668 27625.945 - 27827.594: 99.6847% ( 17) 00:07:40.668 27827.594 - 28029.243: 99.8879% ( 29) 00:07:40.668 28029.243 - 28230.892: 99.9229% ( 5) 00:07:40.668 28230.892 - 28432.542: 99.9580% ( 5) 00:07:40.668 28432.542 - 28634.191: 100.0000% ( 6) 00:07:40.668 00:07:40.668 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.668 ============================================================================== 00:07:40.668 Range in us Cumulative IO count 00:07:40.668 5696.591 - 5721.797: 0.0070% ( 1) 00:07:40.668 5721.797 - 5747.003: 0.0140% ( 1) 00:07:40.668 5797.415 - 5822.622: 0.0280% ( 2) 00:07:40.668 5822.622 - 5847.828: 0.0350% ( 1) 00:07:40.668 5847.828 - 5873.034: 0.0561% ( 3) 00:07:40.668 5873.034 - 5898.240: 0.0701% ( 2) 00:07:40.668 5898.240 - 5923.446: 0.0911% ( 3) 00:07:40.668 5923.446 - 5948.652: 0.1051% ( 2) 00:07:40.668 5948.652 - 5973.858: 0.1682% ( 9) 00:07:40.668 5973.858 - 5999.065: 0.2522% ( 12) 00:07:40.668 5999.065 - 6024.271: 0.3643% ( 16) 00:07:40.668 6024.271 - 6049.477: 0.4835% ( 17) 00:07:40.668 6049.477 - 6074.683: 0.7147% ( 33) 00:07:40.668 6074.683 - 6099.889: 1.0090% ( 42) 00:07:40.668 6099.889 - 6125.095: 1.2962% ( 41) 00:07:40.668 6125.095 - 6150.302: 1.7447% ( 64) 00:07:40.668 6150.302 - 6175.508: 2.2492% ( 72) 00:07:40.668 6175.508 - 6200.714: 2.7887% ( 77) 00:07:40.668 6200.714 - 6225.920: 
00:07:40.669 [~160 bucket lines of this histogram condensed to representative points:]
00:07:40.669     6553.600 -  6604.012:   30.4442% (   714)
00:07:40.669     6856.074 -  6906.486:   55.7525% (   472)
00:07:40.669    18148.431 - 18249.255:   95.0533% (   153)
00:07:40.670    19358.326 - 19459.151:   99.0121% (    10)
00:07:40.670    28230.892 - 28432.542:  100.0000% (     1)
00:07:40.670 
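Each histogram line is one latency bucket: the two numbers on the left are the bucket's bounds in microseconds, the percentage is the cumulative share of IOs completed by that bucket's upper bound, and the parenthesized figure is the raw IO count that landed in the bucket. A percentile can therefore be read off as the first bucket whose cumulative percentage reaches the target; for the table above that puts p99 in the 19358.326 - 19459.151 us bucket. A minimal sketch for pulling this out of a saved copy of the output (perf.log is a hypothetical filename; the field arithmetic tolerates the leading timestamp column):

    # Print the first bucket whose cumulative percentage reaches 99%,
    # i.e. the bucket that contains the approximate p99 latency.
    awk '/ - / && /%/ { pct = $(NF-2); sub(/%/, "", pct)
                        if (pct + 0 >= 99) { print; exit } }' perf.log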
00:07:40.670 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:40.670 ==============================================================================
00:07:40.670        Range in us     Cumulative IO count
00:07:40.670 [~170 bucket lines condensed to representative points:]
00:07:40.670     5646.178 -  5671.385:    0.0070% (     1)
00:07:40.670     6805.662 -  6856.074:   50.3573% (   454)
00:07:40.671    19761.625 - 19862.449:   99.0331% (    17)
00:07:40.671    28029.243 - 28230.892:  100.0000% (     1)
00:07:40.671 
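Tables like these are produced by SPDK's perf example application with latency tracking enabled, which is what the nvme_perf test drives through nvme/nvme.sh. A hand run against locally bound devices would look roughly like the sketch below; the exact option spellings are an assumption and can drift between SPDK releases, so perf --help is the authority:

    sudo scripts/setup.sh    # SPDK's standard script to bind NVMe devices to a userspace driver
    # queue depth 128, 4 KiB IOs, 50/50 random read/write, 10 s, latency tracking on
    sudo build/examples/perf -q 128 -o 4096 -w randrw -M 50 -t 10 -L

Without the latency-tracking flag the tool reports only aggregate IOPS and average latency, and no per-bucket histograms are printed.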
00:07:40.671 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:40.671 ==============================================================================
00:07:40.671        Range in us     Cumulative IO count
00:07:40.671 [~170 bucket lines condensed to representative points:]
00:07:40.671     5772.209 -  5797.415:    0.0140% (     2)
00:07:40.671     6805.662 -  6856.074:   52.0390% (   521)
00:07:40.672    18047.606 - 18148.431:   94.2405% (   644)
00:07:40.672    18854.203 - 18955.028:   99.0401% (    52)
00:07:40.672    26617.698 - 26819.348:  100.0000% (     2)
00:07:40.672 
00:07:40.672 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:40.672 ==============================================================================
00:07:40.672        Range in us     Cumulative IO count
00:07:40.672 [~165 bucket lines condensed to representative points:]
00:07:40.673     5721.797 -  5747.003:    0.0070% (     1)
00:07:40.673     6805.662 -  6856.074:   52.7747% (   612)
00:07:40.674    18047.606 - 18148.431:   94.8150% (   643)
00:07:40.674    18854.203 - 18955.028:   99.2503% (    38)
00:07:40.674    24903.680 - 25004.505:  100.0000% (     4)
00:07:40.674 
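Note that 0000:00:12.0 exposes three namespaces, so it gets three histograms; the third follows below. To inspect the namespace layout behind figures like these, SPDK ships an identify example next to perf and hello_world. This invocation is a sketch, not taken from this log:

    sudo build/examples/identify    # dumps controller and per-namespace details for every attached device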
00:07:40.674 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:40.674 ==============================================================================
00:07:40.674        Range in us     Cumulative IO count
00:07:40.674 [~150 bucket lines condensed to representative points:]
00:07:40.674     5772.209 -  5797.415:    0.0140% (     2)
00:07:40.674     6805.662 -  6856.074:   52.3996% (   571)
00:07:40.675    18047.606 - 18148.431:   95.2706% (   694)
00:07:40.675    18652.554 - 18753.378:   99.0653% (    60)
00:07:40.675    19459.151 - 19559.975:  100.0000% (     2)
00:07:40.675 
00:07:40.675  11:50:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:40.675 
00:07:40.675 real 0m2.489s
00:07:40.675 user 0m2.196s
00:07:40.675 sys 0m0.202s
00:07:40.675  11:50:38 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:40.675  11:50:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:40.675 ************************************
00:07:40.675 END TEST nvme_perf
00:07:40.675 ************************************
00:07:40.934  11:50:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:40.934  11:50:38 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:07:40.934  11:50:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:40.934  11:50:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.934 ************************************
00:07:40.934 START TEST nvme_hello_world
00:07:40.934 ************************************
00:07:40.934  11:50:38 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:40.934 Initializing NVMe Controllers
00:07:40.934 Attached to 0000:00:11.0
00:07:40.934 Namespace ID: 1 size: 5GB
00:07:40.934 Attached to 0000:00:13.0
00:07:40.934 Namespace ID: 1 size: 1GB
00:07:40.934 Attached to 0000:00:10.0
00:07:40.934 Namespace ID: 1 size: 6GB
00:07:40.934 Attached to 0000:00:12.0
00:07:40.934 Namespace ID: 1 size: 4GB
00:07:40.934 Namespace ID: 2 size: 4GB
00:07:40.934 Namespace ID: 3 size: 4GB
00:07:40.934 Initialization complete.
00:07:40.934 INFO: using host memory buffer for IO
00:07:40.934 Hello world!
00:07:40.934 INFO: using host memory buffer for IO
00:07:40.934 Hello world!
00:07:40.934 INFO: using host memory buffer for IO
00:07:40.934 Hello world!
00:07:40.934 INFO: using host memory buffer for IO
00:07:40.934 Hello world!
00:07:40.934 INFO: using host memory buffer for IO
00:07:40.934 Hello world!
00:07:40.934 INFO: using host memory buffer for IO
00:07:40.934 Hello world!
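hello_world is the smallest SPDK NVMe example: for each namespace it attaches to, it writes one sector containing a greeting, reads it back, and prints it, which is why six "Hello world!" lines follow the six namespaces enumerated above. The "using host memory buffer for IO" lines mean the IO buffer sits in ordinary host DRAM because no controller memory buffer was available. Reproducing the run outside the harness is the same invocation; treating -i 0 as the shared-memory instance ID is an assumption worth checking against the example's usage text:

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                  # rebind the NVMe controllers to a userspace driver
    sudo build/examples/hello_world -i 0   # identical to the command the harness ran above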
00:07:40.934 
00:07:40.934 real 0m0.230s
00:07:40.934 user 0m0.083s
00:07:40.934 sys 0m0.102s
00:07:40.934 ************************************
00:07:40.934 END TEST nvme_hello_world
00:07:40.934 ************************************
00:07:40.934  11:50:38 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:40.934  11:50:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
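Every test in this log follows the same banner-and-timing pattern because each binary is launched through the run_test helper in the common/autotest_common.sh referenced on the @-prefixed lines. Conceptually it is little more than the sketch below; the real helper also manages xtrace state and exit codes, so this is an illustration rather than the actual implementation:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # the `time` builtin produces the real/user/sys triple seen after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl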
00:07:41.451 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:41.451 NVMe Readv/Writev Request test 00:07:41.451 Attached to 0000:00:11.0 00:07:41.451 Attached to 0000:00:13.0 00:07:41.451 Attached to 0000:00:10.0 00:07:41.451 Attached to 0000:00:12.0 00:07:41.451 0000:00:11.0: build_io_request_2 test passed 00:07:41.451 0000:00:11.0: build_io_request_4 test passed 00:07:41.451 0000:00:11.0: build_io_request_5 test passed 00:07:41.451 0000:00:11.0: build_io_request_6 test passed 00:07:41.451 0000:00:11.0: build_io_request_7 test passed 00:07:41.451 0000:00:11.0: build_io_request_10 test passed 00:07:41.451 0000:00:10.0: build_io_request_2 test passed 00:07:41.451 0000:00:10.0: build_io_request_4 test passed 00:07:41.451 0000:00:10.0: build_io_request_5 test passed 00:07:41.451 0000:00:10.0: build_io_request_6 test passed 00:07:41.451 0000:00:10.0: build_io_request_7 test passed 00:07:41.451 0000:00:10.0: build_io_request_10 test passed 00:07:41.451 Cleaning up... 00:07:41.451 ************************************ 00:07:41.451 END TEST nvme_sgl 00:07:41.451 ************************************ 00:07:41.451 00:07:41.451 real 0m0.281s 00:07:41.451 user 0m0.143s 00:07:41.451 sys 0m0.100s 00:07:41.451 11:50:38 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.451 11:50:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:41.451 11:50:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:41.451 11:50:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:41.451 11:50:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.451 11:50:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.451 ************************************ 00:07:41.451 START TEST nvme_e2edp 00:07:41.451 ************************************ 00:07:41.451 11:50:38 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:41.709 NVMe Write/Read with End-to-End data protection test 00:07:41.709 Attached to 0000:00:11.0 00:07:41.709 Attached to 0000:00:13.0 00:07:41.709 Attached to 0000:00:10.0 00:07:41.709 Attached to 0000:00:12.0 00:07:41.709 Cleaning up... 
00:07:41.709 00:07:41.709 real 0m0.212s 00:07:41.709 user 0m0.068s 00:07:41.709 sys 0m0.100s 00:07:41.709 11:50:39 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.709 ************************************ 00:07:41.709 END TEST nvme_e2edp 00:07:41.709 ************************************ 00:07:41.709 11:50:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:41.709 11:50:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:41.709 11:50:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:41.709 11:50:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.709 11:50:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.709 ************************************ 00:07:41.709 START TEST nvme_reserve 00:07:41.709 ************************************ 00:07:41.709 11:50:39 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:41.966 ===================================================== 00:07:41.966 NVMe Controller at PCI bus 0, device 17, function 0 00:07:41.966 ===================================================== 00:07:41.966 Reservations: Not Supported 00:07:41.966 ===================================================== 00:07:41.966 NVMe Controller at PCI bus 0, device 19, function 0 00:07:41.966 ===================================================== 00:07:41.966 Reservations: Not Supported 00:07:41.966 ===================================================== 00:07:41.966 NVMe Controller at PCI bus 0, device 16, function 0 00:07:41.966 ===================================================== 00:07:41.966 Reservations: Not Supported 00:07:41.966 ===================================================== 00:07:41.966 NVMe Controller at PCI bus 0, device 18, function 0 00:07:41.966 ===================================================== 00:07:41.966 Reservations: Not Supported 00:07:41.966 Reservation test passed 00:07:41.966 00:07:41.966 real 0m0.203s 00:07:41.966 user 0m0.074s 00:07:41.966 sys 0m0.085s 00:07:41.966 11:50:39 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.966 11:50:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:41.966 ************************************ 00:07:41.966 END TEST nvme_reserve 00:07:41.966 ************************************ 00:07:41.966 11:50:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:41.966 11:50:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:41.966 11:50:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.966 11:50:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.966 ************************************ 00:07:41.966 START TEST nvme_err_injection 00:07:41.966 ************************************ 00:07:41.966 11:50:39 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:42.224 NVMe Error Injection test 00:07:42.224 Attached to 0000:00:11.0 00:07:42.224 Attached to 0000:00:13.0 00:07:42.224 Attached to 0000:00:10.0 00:07:42.224 Attached to 0000:00:12.0 00:07:42.224 0000:00:10.0: get features failed as expected 00:07:42.224 0000:00:12.0: get features failed as expected 00:07:42.224 0000:00:11.0: get features failed as expected 00:07:42.224 0000:00:13.0: get features failed as expected 00:07:42.224 
0000:00:11.0: get features successfully as expected 00:07:42.224 0000:00:13.0: get features successfully as expected 00:07:42.224 0000:00:10.0: get features successfully as expected 00:07:42.224 0000:00:12.0: get features successfully as expected 00:07:42.224 0000:00:11.0: read failed as expected 00:07:42.224 0000:00:13.0: read failed as expected 00:07:42.224 0000:00:10.0: read failed as expected 00:07:42.224 0000:00:12.0: read failed as expected 00:07:42.224 0000:00:11.0: read successfully as expected 00:07:42.224 0000:00:13.0: read successfully as expected 00:07:42.224 0000:00:10.0: read successfully as expected 00:07:42.224 0000:00:12.0: read successfully as expected 00:07:42.224 Cleaning up... 00:07:42.224 00:07:42.224 real 0m0.227s 00:07:42.224 user 0m0.084s 00:07:42.224 sys 0m0.099s 00:07:42.224 ************************************ 00:07:42.224 END TEST nvme_err_injection 00:07:42.224 ************************************ 00:07:42.224 11:50:39 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.224 11:50:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:42.224 11:50:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:42.224 11:50:39 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:07:42.224 11:50:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.224 11:50:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.224 ************************************ 00:07:42.224 START TEST nvme_overhead 00:07:42.224 ************************************ 00:07:42.224 11:50:39 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:43.599 Initializing NVMe Controllers 00:07:43.599 Attached to 0000:00:11.0 00:07:43.599 Attached to 0000:00:13.0 00:07:43.599 Attached to 0000:00:10.0 00:07:43.599 Attached to 0000:00:12.0 00:07:43.599 Initialization complete. Launching workers. 
00:07:43.599 submit (in ns) avg, min, max = 11326.6, 9743.1, 51616.9 00:07:43.599 complete (in ns) avg, min, max = 7552.5, 7199.2, 44760.0 00:07:43.599 00:07:43.599 Submit histogram 00:07:43.599 ================ 00:07:43.599 Range in us Cumulative Count 00:07:43.599 9.698 - 9.748: 0.0055% ( 1) 00:07:43.599 10.142 - 10.191: 0.0110% ( 1) 00:07:43.599 10.338 - 10.388: 0.0165% ( 1) 00:07:43.599 10.535 - 10.585: 0.0220% ( 1) 00:07:43.599 10.831 - 10.880: 0.1594% ( 25) 00:07:43.599 10.880 - 10.929: 1.5829% ( 259) 00:07:43.599 10.929 - 10.978: 7.3042% ( 1041) 00:07:43.599 10.978 - 11.028: 20.1869% ( 2344) 00:07:43.599 11.028 - 11.077: 36.8838% ( 3038) 00:07:43.599 11.077 - 11.126: 53.6906% ( 3058) 00:07:43.599 11.126 - 11.175: 66.9195% ( 2407) 00:07:43.599 11.175 - 11.225: 75.5153% ( 1564) 00:07:43.599 11.225 - 11.274: 81.0058% ( 999) 00:07:43.599 11.274 - 11.323: 84.5287% ( 641) 00:07:43.599 11.323 - 11.372: 86.9360% ( 438) 00:07:43.599 11.372 - 11.422: 88.6452% ( 311) 00:07:43.599 11.422 - 11.471: 89.7170% ( 195) 00:07:43.599 11.471 - 11.520: 90.5359% ( 149) 00:07:43.599 11.520 - 11.569: 91.1899% ( 119) 00:07:43.599 11.569 - 11.618: 91.8219% ( 115) 00:07:43.599 11.618 - 11.668: 92.3715% ( 100) 00:07:43.599 11.668 - 11.717: 93.0201% ( 118) 00:07:43.599 11.717 - 11.766: 93.5257% ( 92) 00:07:43.599 11.766 - 11.815: 93.8664% ( 62) 00:07:43.599 11.815 - 11.865: 94.3226% ( 83) 00:07:43.599 11.865 - 11.914: 94.6579% ( 61) 00:07:43.599 11.914 - 11.963: 94.9766% ( 58) 00:07:43.599 11.963 - 12.012: 95.3174% ( 62) 00:07:43.599 12.012 - 12.062: 95.5482% ( 42) 00:07:43.599 12.062 - 12.111: 95.6966% ( 27) 00:07:43.599 12.111 - 12.160: 95.8945% ( 36) 00:07:43.599 12.160 - 12.209: 96.0484% ( 28) 00:07:43.599 12.209 - 12.258: 96.2682% ( 40) 00:07:43.599 12.258 - 12.308: 96.4056% ( 25) 00:07:43.599 12.308 - 12.357: 96.5485% ( 26) 00:07:43.599 12.357 - 12.406: 96.6749% ( 23) 00:07:43.599 12.406 - 12.455: 96.7574% ( 15) 00:07:43.599 12.455 - 12.505: 96.8233% ( 12) 00:07:43.599 12.505 - 12.554: 96.8618% ( 7) 00:07:43.599 12.554 - 12.603: 96.9167% ( 10) 00:07:43.599 12.603 - 12.702: 96.9662% ( 9) 00:07:43.599 12.702 - 12.800: 97.0047% ( 7) 00:07:43.599 12.800 - 12.898: 97.0376% ( 6) 00:07:43.599 12.898 - 12.997: 97.0541% ( 3) 00:07:43.599 12.997 - 13.095: 97.1476% ( 17) 00:07:43.599 13.095 - 13.194: 97.2190% ( 13) 00:07:43.599 13.194 - 13.292: 97.3234% ( 19) 00:07:43.599 13.292 - 13.391: 97.4883% ( 30) 00:07:43.599 13.391 - 13.489: 97.5818% ( 17) 00:07:43.599 13.489 - 13.588: 97.6532% ( 13) 00:07:43.599 13.588 - 13.686: 97.7356% ( 15) 00:07:43.599 13.686 - 13.785: 97.8016% ( 12) 00:07:43.599 13.785 - 13.883: 97.8236% ( 4) 00:07:43.599 13.883 - 13.982: 97.8401% ( 3) 00:07:43.599 13.982 - 14.080: 97.8950% ( 10) 00:07:43.599 14.080 - 14.178: 97.9335% ( 7) 00:07:43.599 14.178 - 14.277: 97.9775% ( 8) 00:07:43.599 14.277 - 14.375: 97.9995% ( 4) 00:07:43.599 14.375 - 14.474: 98.0159% ( 3) 00:07:43.599 14.474 - 14.572: 98.0434% ( 5) 00:07:43.599 14.572 - 14.671: 98.0544% ( 2) 00:07:43.599 14.671 - 14.769: 98.0819% ( 5) 00:07:43.599 14.769 - 14.868: 98.1094% ( 5) 00:07:43.599 14.868 - 14.966: 98.1478% ( 7) 00:07:43.599 14.966 - 15.065: 98.1808% ( 6) 00:07:43.599 15.065 - 15.163: 98.2193% ( 7) 00:07:43.599 15.163 - 15.262: 98.2358% ( 3) 00:07:43.599 15.262 - 15.360: 98.2578% ( 4) 00:07:43.599 15.360 - 15.458: 98.2743% ( 3) 00:07:43.599 15.458 - 15.557: 98.3017% ( 5) 00:07:43.599 15.557 - 15.655: 98.3292% ( 5) 00:07:43.599 15.655 - 15.754: 98.3457% ( 3) 00:07:43.599 15.754 - 15.852: 98.3567% ( 2) 00:07:43.599 15.951 - 16.049: 
98.3787% ( 4) 00:07:43.599 16.049 - 16.148: 98.3952% ( 3) 00:07:43.599 16.148 - 16.246: 98.4171% ( 4) 00:07:43.599 16.246 - 16.345: 98.4336% ( 3) 00:07:43.599 16.345 - 16.443: 98.4446% ( 2) 00:07:43.599 16.443 - 16.542: 98.4996% ( 10) 00:07:43.599 16.542 - 16.640: 98.5875% ( 16) 00:07:43.599 16.640 - 16.738: 98.7469% ( 29) 00:07:43.599 16.738 - 16.837: 98.8568% ( 20) 00:07:43.599 16.837 - 16.935: 98.9063% ( 9) 00:07:43.599 16.935 - 17.034: 98.9942% ( 16) 00:07:43.599 17.034 - 17.132: 99.0987% ( 19) 00:07:43.599 17.132 - 17.231: 99.1591% ( 11) 00:07:43.599 17.231 - 17.329: 99.2635% ( 19) 00:07:43.599 17.329 - 17.428: 99.3295% ( 12) 00:07:43.599 17.428 - 17.526: 99.3735% ( 8) 00:07:43.599 17.526 - 17.625: 99.4339% ( 11) 00:07:43.599 17.625 - 17.723: 99.4614% ( 5) 00:07:43.599 17.723 - 17.822: 99.4944% ( 6) 00:07:43.599 17.822 - 17.920: 99.5273% ( 6) 00:07:43.599 17.920 - 18.018: 99.5658% ( 7) 00:07:43.599 18.018 - 18.117: 99.5933% ( 5) 00:07:43.599 18.117 - 18.215: 99.6263% ( 6) 00:07:43.599 18.215 - 18.314: 99.6483% ( 4) 00:07:43.599 18.314 - 18.412: 99.6592% ( 2) 00:07:43.599 18.412 - 18.511: 99.6702% ( 2) 00:07:43.599 18.609 - 18.708: 99.6867% ( 3) 00:07:43.599 18.708 - 18.806: 99.6977% ( 2) 00:07:43.599 18.806 - 18.905: 99.7087% ( 2) 00:07:43.600 18.905 - 19.003: 99.7142% ( 1) 00:07:43.600 19.003 - 19.102: 99.7252% ( 2) 00:07:43.600 19.102 - 19.200: 99.7307% ( 1) 00:07:43.600 19.200 - 19.298: 99.7417% ( 2) 00:07:43.600 19.298 - 19.397: 99.7472% ( 1) 00:07:43.600 19.495 - 19.594: 99.7637% ( 3) 00:07:43.600 19.692 - 19.791: 99.7692% ( 1) 00:07:43.600 19.791 - 19.889: 99.7747% ( 1) 00:07:43.600 19.889 - 19.988: 99.7802% ( 1) 00:07:43.600 19.988 - 20.086: 99.7857% ( 1) 00:07:43.600 20.086 - 20.185: 99.7912% ( 1) 00:07:43.600 20.185 - 20.283: 99.7966% ( 1) 00:07:43.600 20.382 - 20.480: 99.8021% ( 1) 00:07:43.600 20.480 - 20.578: 99.8131% ( 2) 00:07:43.600 20.578 - 20.677: 99.8296% ( 3) 00:07:43.600 20.677 - 20.775: 99.8461% ( 3) 00:07:43.600 21.071 - 21.169: 99.8516% ( 1) 00:07:43.600 21.169 - 21.268: 99.8571% ( 1) 00:07:43.600 21.268 - 21.366: 99.8681% ( 2) 00:07:43.600 21.465 - 21.563: 99.8791% ( 2) 00:07:43.600 21.957 - 22.055: 99.8846% ( 1) 00:07:43.600 22.055 - 22.154: 99.8901% ( 1) 00:07:43.600 22.252 - 22.351: 99.8956% ( 1) 00:07:43.600 22.351 - 22.449: 99.9066% ( 2) 00:07:43.600 23.040 - 23.138: 99.9121% ( 1) 00:07:43.600 23.434 - 23.532: 99.9176% ( 1) 00:07:43.600 23.828 - 23.926: 99.9231% ( 1) 00:07:43.600 24.123 - 24.222: 99.9286% ( 1) 00:07:43.600 24.615 - 24.714: 99.9340% ( 1) 00:07:43.600 24.812 - 24.911: 99.9395% ( 1) 00:07:43.600 25.108 - 25.206: 99.9450% ( 1) 00:07:43.600 25.206 - 25.403: 99.9505% ( 1) 00:07:43.600 29.932 - 30.129: 99.9560% ( 1) 00:07:43.600 38.203 - 38.400: 99.9615% ( 1) 00:07:43.600 42.535 - 42.732: 99.9670% ( 1) 00:07:43.600 44.111 - 44.308: 99.9725% ( 1) 00:07:43.600 45.686 - 45.883: 99.9780% ( 1) 00:07:43.600 45.883 - 46.080: 99.9835% ( 1) 00:07:43.600 46.474 - 46.671: 99.9890% ( 1) 00:07:43.600 49.822 - 50.018: 99.9945% ( 1) 00:07:43.600 51.594 - 51.988: 100.0000% ( 1) 00:07:43.600 00:07:43.600 Complete histogram 00:07:43.600 ================== 00:07:43.600 Range in us Cumulative Count 00:07:43.600 7.188 - 7.237: 0.0879% ( 16) 00:07:43.600 7.237 - 7.286: 1.8192% ( 315) 00:07:43.600 7.286 - 7.335: 12.6189% ( 1965) 00:07:43.600 7.335 - 7.385: 38.6040% ( 4728) 00:07:43.600 7.385 - 7.434: 64.5067% ( 4713) 00:07:43.600 7.434 - 7.483: 81.0442% ( 3009) 00:07:43.600 7.483 - 7.532: 89.4531% ( 1530) 00:07:43.600 7.532 - 7.582: 93.2014% ( 682) 00:07:43.600 7.582 - 
7.631: 95.3229% ( 386) 00:07:43.600 7.631 - 7.680: 96.3287% ( 183) 00:07:43.600 7.680 - 7.729: 96.8398% ( 93) 00:07:43.600 7.729 - 7.778: 97.1421% ( 55) 00:07:43.600 7.778 - 7.828: 97.3674% ( 41) 00:07:43.600 7.828 - 7.877: 97.4663% ( 18) 00:07:43.600 7.877 - 7.926: 97.5543% ( 16) 00:07:43.600 7.926 - 7.975: 97.5872% ( 6) 00:07:43.600 7.975 - 8.025: 97.6092% ( 4) 00:07:43.600 8.025 - 8.074: 97.6972% ( 16) 00:07:43.600 8.074 - 8.123: 97.7192% ( 4) 00:07:43.600 8.123 - 8.172: 97.7631% ( 8) 00:07:43.600 8.172 - 8.222: 97.7961% ( 6) 00:07:43.600 8.222 - 8.271: 97.8511% ( 10) 00:07:43.600 8.271 - 8.320: 97.9280% ( 14) 00:07:43.600 8.320 - 8.369: 97.9830% ( 10) 00:07:43.600 8.369 - 8.418: 98.0159% ( 6) 00:07:43.600 8.418 - 8.468: 98.0269% ( 2) 00:07:43.600 8.468 - 8.517: 98.0324% ( 1) 00:07:43.600 8.566 - 8.615: 98.0434% ( 2) 00:07:43.600 8.615 - 8.665: 98.0544% ( 2) 00:07:43.600 8.665 - 8.714: 98.0599% ( 1) 00:07:43.600 8.714 - 8.763: 98.0654% ( 1) 00:07:43.600 8.862 - 8.911: 98.0764% ( 2) 00:07:43.600 8.911 - 8.960: 98.0929% ( 3) 00:07:43.600 8.960 - 9.009: 98.1039% ( 2) 00:07:43.600 9.157 - 9.206: 98.1094% ( 1) 00:07:43.600 9.206 - 9.255: 98.1149% ( 1) 00:07:43.600 9.452 - 9.502: 98.1259% ( 2) 00:07:43.600 9.551 - 9.600: 98.1314% ( 1) 00:07:43.600 9.600 - 9.649: 98.1369% ( 1) 00:07:43.600 9.649 - 9.698: 98.1478% ( 2) 00:07:43.600 9.797 - 9.846: 98.1533% ( 1) 00:07:43.600 9.895 - 9.945: 98.1588% ( 1) 00:07:43.600 9.994 - 10.043: 98.1698% ( 2) 00:07:43.600 10.043 - 10.092: 98.1753% ( 1) 00:07:43.600 10.092 - 10.142: 98.1808% ( 1) 00:07:43.600 10.191 - 10.240: 98.1863% ( 1) 00:07:43.600 10.240 - 10.289: 98.1918% ( 1) 00:07:43.600 10.338 - 10.388: 98.2028% ( 2) 00:07:43.600 10.388 - 10.437: 98.2138% ( 2) 00:07:43.600 10.486 - 10.535: 98.2193% ( 1) 00:07:43.600 10.683 - 10.732: 98.2303% ( 2) 00:07:43.600 10.782 - 10.831: 98.2358% ( 1) 00:07:43.600 10.831 - 10.880: 98.2413% ( 1) 00:07:43.600 10.880 - 10.929: 98.2523% ( 2) 00:07:43.600 11.077 - 11.126: 98.2578% ( 1) 00:07:43.600 11.126 - 11.175: 98.2633% ( 1) 00:07:43.600 11.175 - 11.225: 98.2688% ( 1) 00:07:43.600 11.618 - 11.668: 98.2743% ( 1) 00:07:43.600 11.668 - 11.717: 98.2797% ( 1) 00:07:43.600 11.717 - 11.766: 98.2852% ( 1) 00:07:43.600 11.766 - 11.815: 98.2962% ( 2) 00:07:43.600 11.914 - 11.963: 98.3072% ( 2) 00:07:43.600 11.963 - 12.012: 98.3237% ( 3) 00:07:43.600 12.160 - 12.209: 98.3347% ( 2) 00:07:43.600 12.209 - 12.258: 98.3402% ( 1) 00:07:43.600 12.258 - 12.308: 98.3622% ( 4) 00:07:43.600 12.308 - 12.357: 98.3677% ( 1) 00:07:43.600 12.455 - 12.505: 98.3732% ( 1) 00:07:43.600 12.505 - 12.554: 98.3842% ( 2) 00:07:43.600 12.554 - 12.603: 98.3897% ( 1) 00:07:43.600 12.702 - 12.800: 98.4117% ( 4) 00:07:43.600 12.800 - 12.898: 98.4996% ( 16) 00:07:43.600 12.898 - 12.997: 98.5985% ( 18) 00:07:43.600 12.997 - 13.095: 98.6974% ( 18) 00:07:43.600 13.095 - 13.194: 98.7414% ( 8) 00:07:43.600 13.194 - 13.292: 98.8239% ( 15) 00:07:43.600 13.292 - 13.391: 98.9008% ( 14) 00:07:43.600 13.391 - 13.489: 98.9613% ( 11) 00:07:43.600 13.489 - 13.588: 99.0382% ( 14) 00:07:43.600 13.588 - 13.686: 99.1316% ( 17) 00:07:43.600 13.686 - 13.785: 99.2086% ( 14) 00:07:43.600 13.785 - 13.883: 99.3075% ( 18) 00:07:43.600 13.883 - 13.982: 99.3899% ( 15) 00:07:43.600 13.982 - 14.080: 99.4614% ( 13) 00:07:43.600 14.080 - 14.178: 99.4999% ( 7) 00:07:43.600 14.178 - 14.277: 99.5603% ( 11) 00:07:43.600 14.277 - 14.375: 99.5823% ( 4) 00:07:43.600 14.375 - 14.474: 99.6043% ( 4) 00:07:43.600 14.474 - 14.572: 99.6153% ( 2) 00:07:43.600 14.572 - 14.671: 99.6373% ( 4) 
00:07:43.600 14.671 - 14.769: 99.6483% ( 2) 00:07:43.600 14.769 - 14.868: 99.6538% ( 1) 00:07:43.600 14.868 - 14.966: 99.6592% ( 1) 00:07:43.600 14.966 - 15.065: 99.6702% ( 2) 00:07:43.600 15.065 - 15.163: 99.6812% ( 2) 00:07:43.600 15.163 - 15.262: 99.6922% ( 2) 00:07:43.600 15.262 - 15.360: 99.6977% ( 1) 00:07:43.600 15.360 - 15.458: 99.7087% ( 2) 00:07:43.600 15.458 - 15.557: 99.7197% ( 2) 00:07:43.600 15.655 - 15.754: 99.7307% ( 2) 00:07:43.600 15.754 - 15.852: 99.7472% ( 3) 00:07:43.600 15.852 - 15.951: 99.7582% ( 2) 00:07:43.600 16.049 - 16.148: 99.7637% ( 1) 00:07:43.600 16.246 - 16.345: 99.7747% ( 2) 00:07:43.600 16.345 - 16.443: 99.7802% ( 1) 00:07:43.600 16.443 - 16.542: 99.7857% ( 1) 00:07:43.600 16.542 - 16.640: 99.7966% ( 2) 00:07:43.600 16.738 - 16.837: 99.8021% ( 1) 00:07:43.600 16.935 - 17.034: 99.8131% ( 2) 00:07:43.600 17.034 - 17.132: 99.8186% ( 1) 00:07:43.600 17.132 - 17.231: 99.8241% ( 1) 00:07:43.600 17.231 - 17.329: 99.8296% ( 1) 00:07:43.600 17.428 - 17.526: 99.8461% ( 3) 00:07:43.600 17.526 - 17.625: 99.8516% ( 1) 00:07:43.600 17.625 - 17.723: 99.8571% ( 1) 00:07:43.600 17.920 - 18.018: 99.8626% ( 1) 00:07:43.600 18.117 - 18.215: 99.8681% ( 1) 00:07:43.600 18.215 - 18.314: 99.8736% ( 1) 00:07:43.600 18.609 - 18.708: 99.8791% ( 1) 00:07:43.600 18.708 - 18.806: 99.8846% ( 1) 00:07:43.600 19.003 - 19.102: 99.8901% ( 1) 00:07:43.600 19.692 - 19.791: 99.8956% ( 1) 00:07:43.600 20.185 - 20.283: 99.9011% ( 1) 00:07:43.600 20.283 - 20.382: 99.9066% ( 1) 00:07:43.601 20.480 - 20.578: 99.9121% ( 1) 00:07:43.601 20.578 - 20.677: 99.9176% ( 1) 00:07:43.601 20.775 - 20.874: 99.9231% ( 1) 00:07:43.601 20.972 - 21.071: 99.9286% ( 1) 00:07:43.601 21.760 - 21.858: 99.9340% ( 1) 00:07:43.601 22.449 - 22.548: 99.9395% ( 1) 00:07:43.601 24.418 - 24.517: 99.9450% ( 1) 00:07:43.601 26.782 - 26.978: 99.9505% ( 1) 00:07:43.601 28.751 - 28.948: 99.9560% ( 1) 00:07:43.601 30.917 - 31.114: 99.9615% ( 1) 00:07:43.601 31.311 - 31.508: 99.9670% ( 1) 00:07:43.601 31.508 - 31.705: 99.9725% ( 1) 00:07:43.601 35.052 - 35.249: 99.9780% ( 1) 00:07:43.601 35.249 - 35.446: 99.9835% ( 1) 00:07:43.601 36.628 - 36.825: 99.9890% ( 1) 00:07:43.601 42.732 - 42.929: 99.9945% ( 1) 00:07:43.601 44.702 - 44.898: 100.0000% ( 1) 00:07:43.601 00:07:43.601 ************************************ 00:07:43.601 END TEST nvme_overhead 00:07:43.601 ************************************ 00:07:43.601 00:07:43.601 real 0m1.230s 00:07:43.601 user 0m1.080s 00:07:43.601 sys 0m0.098s 00:07:43.601 11:50:40 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.601 11:50:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:43.601 11:50:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:43.601 11:50:40 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:43.601 11:50:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.601 11:50:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.601 ************************************ 00:07:43.601 START TEST nvme_arbitration 00:07:43.601 ************************************ 00:07:43.601 11:50:40 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:46.955 Initializing NVMe Controllers 00:07:46.955 Attached to 0000:00:11.0 00:07:46.955 Attached to 0000:00:13.0 00:07:46.955 Attached to 0000:00:10.0 00:07:46.955 Attached to 0000:00:12.0 00:07:46.955 Associating 
QEMU NVMe Ctrl (12341 ) with lcore 0 00:07:46.955 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:07:46.955 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:07:46.955 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:46.955 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:46.955 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:46.955 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:46.955 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:46.955 Initialization complete. Launching workers. 00:07:46.955 Starting thread on core 1 with urgent priority queue 00:07:46.955 Starting thread on core 2 with urgent priority queue 00:07:46.955 Starting thread on core 3 with urgent priority queue 00:07:46.955 Starting thread on core 0 with urgent priority queue 00:07:46.955 QEMU NVMe Ctrl (12341 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:07:46.955 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:07:46.955 QEMU NVMe Ctrl (12343 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.955 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.955 QEMU NVMe Ctrl (12340 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:07:46.955 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.955 ======================================================== 00:07:46.955 00:07:46.955 00:07:46.955 real 0m3.325s 00:07:46.955 user 0m9.295s 00:07:46.955 sys 0m0.113s 00:07:46.955 11:50:44 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.955 ************************************ 00:07:46.955 END TEST nvme_arbitration 00:07:46.955 ************************************ 00:07:46.955 11:50:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:46.955 11:50:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.955 11:50:44 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:46.955 11:50:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.955 11:50:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.955 ************************************ 00:07:46.955 START TEST nvme_single_aen 00:07:46.955 ************************************ 00:07:46.955 11:50:44 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.955 Asynchronous Event Request test 00:07:46.955 Attached to 0000:00:11.0 00:07:46.955 Attached to 0000:00:13.0 00:07:46.955 Attached to 0000:00:10.0 00:07:46.955 Attached to 0000:00:12.0 00:07:46.955 Reset controller to setup AER completions for this process 00:07:46.955 Registering asynchronous event callbacks... 
00:07:46.955 Getting orig temperature thresholds of all controllers 00:07:46.955 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.955 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.955 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.955 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.955 Setting all controllers temperature threshold low to trigger AER 00:07:46.955 Waiting for all controllers temperature threshold to be set lower 00:07:46.955 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.955 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:46.955 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.955 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:46.955 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.955 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:46.955 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.955 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:46.955 Waiting for all controllers to trigger AER and reset threshold 00:07:46.955 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.955 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.955 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.955 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.955 Cleaning up... 00:07:46.955 ************************************ 00:07:46.955 END TEST nvme_single_aen 00:07:46.956 ************************************ 00:07:46.956 00:07:46.956 real 0m0.221s 00:07:46.956 user 0m0.072s 00:07:46.956 sys 0m0.104s 00:07:46.956 11:50:44 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.956 11:50:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:47.246 11:50:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:47.246 11:50:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:47.246 11:50:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.246 11:50:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.246 ************************************ 00:07:47.246 START TEST nvme_doorbell_aers 00:07:47.246 ************************************ 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
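The xtrace above shows how nvme_doorbell_aers picks its targets: scripts/gen_nvme.sh emits a JSON bdev config, jq pulls each controller's PCIe address (traddr) into the bdfs array, and the test then visits one device at a time under a timeout. A minimal standalone sketch of that enumerate-and-loop pattern, assuming the repo layout and binary paths shown in this log:

    #!/usr/bin/env bash
    # Sketch of the bdf discovery used by nvme_doorbell_aers; paths assumed from this log.
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a JSON config; extract every PCIe address from it.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    for bdf in "${bdfs[@]}"; do
        # 10 s budget per device; --preserve-status keeps the test binary's own exit code.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

Each per-device run that follows ("Executing: test_write_invalid_db ... Failure: ...") is a negative test, so the Failure: lines appear to be the expected outcomes; the suite continues normally after each device.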
00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:47.246 11:50:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:47.246 [2024-11-18 11:50:44.931501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:07:57.227 Executing: test_write_invalid_db 00:07:57.227 Waiting for AER completion... 00:07:57.227 Failure: test_write_invalid_db 00:07:57.227 00:07:57.227 Executing: test_invalid_db_write_overflow_sq 00:07:57.227 Waiting for AER completion... 00:07:57.227 Failure: test_invalid_db_write_overflow_sq 00:07:57.227 00:07:57.227 Executing: test_invalid_db_write_overflow_cq 00:07:57.227 Waiting for AER completion... 00:07:57.227 Failure: test_invalid_db_write_overflow_cq 00:07:57.227 00:07:57.227 11:50:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:57.227 11:50:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:57.485 [2024-11-18 11:50:54.992086] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:07.453 Executing: test_write_invalid_db 00:08:07.453 Waiting for AER completion... 00:08:07.453 Failure: test_write_invalid_db 00:08:07.453 00:08:07.453 Executing: test_invalid_db_write_overflow_sq 00:08:07.453 Waiting for AER completion... 00:08:07.453 Failure: test_invalid_db_write_overflow_sq 00:08:07.453 00:08:07.453 Executing: test_invalid_db_write_overflow_cq 00:08:07.453 Waiting for AER completion... 00:08:07.453 Failure: test_invalid_db_write_overflow_cq 00:08:07.453 00:08:07.453 11:51:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:07.453 11:51:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:07.453 [2024-11-18 11:51:04.999420] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:17.445 Executing: test_write_invalid_db 00:08:17.445 Waiting for AER completion... 00:08:17.445 Failure: test_write_invalid_db 00:08:17.445 00:08:17.445 Executing: test_invalid_db_write_overflow_sq 00:08:17.445 Waiting for AER completion... 00:08:17.445 Failure: test_invalid_db_write_overflow_sq 00:08:17.445 00:08:17.445 Executing: test_invalid_db_write_overflow_cq 00:08:17.445 Waiting for AER completion... 
00:08:17.445 Failure: test_invalid_db_write_overflow_cq 00:08:17.445 00:08:17.445 11:51:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:17.445 11:51:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:17.445 [2024-11-18 11:51:15.018282] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 Executing: test_write_invalid_db 00:08:27.435 Waiting for AER completion... 00:08:27.435 Failure: test_write_invalid_db 00:08:27.435 00:08:27.435 Executing: test_invalid_db_write_overflow_sq 00:08:27.435 Waiting for AER completion... 00:08:27.435 Failure: test_invalid_db_write_overflow_sq 00:08:27.435 00:08:27.435 Executing: test_invalid_db_write_overflow_cq 00:08:27.435 Waiting for AER completion... 00:08:27.435 Failure: test_invalid_db_write_overflow_cq 00:08:27.435 00:08:27.435 00:08:27.435 real 0m40.194s 00:08:27.435 user 0m34.216s 00:08:27.435 sys 0m5.599s 00:08:27.435 11:51:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.435 11:51:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:27.435 ************************************ 00:08:27.435 END TEST nvme_doorbell_aers 00:08:27.435 ************************************ 00:08:27.435 11:51:24 nvme -- nvme/nvme.sh@97 -- # uname 00:08:27.435 11:51:24 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:27.435 11:51:24 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:27.435 11:51:24 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:27.435 11:51:24 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.435 11:51:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.435 ************************************ 00:08:27.435 START TEST nvme_multi_aen 00:08:27.435 ************************************ 00:08:27.435 11:51:24 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:27.435 [2024-11-18 11:51:25.071926] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.072107] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.072216] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.073832] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.073954] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.073966] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.074907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. 
Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.074930] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.074938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.075959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.076042] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 [2024-11-18 11:51:25.076130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63142) is not found. Dropping the request. 00:08:27.435 Child process pid: 63664 00:08:27.694 [Child] Asynchronous Event Request test 00:08:27.694 [Child] Attached to 0000:00:11.0 00:08:27.694 [Child] Attached to 0000:00:13.0 00:08:27.694 [Child] Attached to 0000:00:10.0 00:08:27.694 [Child] Attached to 0000:00:12.0 00:08:27.694 [Child] Registering asynchronous event callbacks... 00:08:27.694 [Child] Getting orig temperature thresholds of all controllers 00:08:27.694 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:27.694 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 [Child] Cleaning up... 00:08:27.694 Asynchronous Event Request test 00:08:27.694 Attached to 0000:00:11.0 00:08:27.694 Attached to 0000:00:13.0 00:08:27.694 Attached to 0000:00:10.0 00:08:27.694 Attached to 0000:00:12.0 00:08:27.694 Reset controller to setup AER completions for this process 00:08:27.694 Registering asynchronous event callbacks... 
00:08:27.694 Getting orig temperature thresholds of all controllers 00:08:27.694 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.694 Setting all controllers temperature threshold low to trigger AER 00:08:27.694 Waiting for all controllers temperature threshold to be set lower 00:08:27.694 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:27.694 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:27.694 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:27.694 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.694 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:27.694 Waiting for all controllers to trigger AER and reset threshold 00:08:27.694 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.694 Cleaning up... 00:08:27.694 00:08:27.694 real 0m0.424s 00:08:27.694 user 0m0.141s 00:08:27.694 sys 0m0.177s 00:08:27.694 11:51:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.694 11:51:25 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:27.694 ************************************ 00:08:27.694 END TEST nvme_multi_aen 00:08:27.694 ************************************ 00:08:27.694 11:51:25 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.694 11:51:25 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:27.694 11:51:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.694 11:51:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.694 ************************************ 00:08:27.694 START TEST nvme_startup 00:08:27.694 ************************************ 00:08:27.694 11:51:25 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.953 Initializing NVMe Controllers 00:08:27.953 Attached to 0000:00:11.0 00:08:27.954 Attached to 0000:00:13.0 00:08:27.954 Attached to 0000:00:10.0 00:08:27.954 Attached to 0000:00:12.0 00:08:27.954 Initialization complete. 00:08:27.954 Time used:132669.219 (us). 
00:08:27.954 ************************************ 00:08:27.954 END TEST nvme_startup 00:08:27.954 ************************************ 00:08:27.954 00:08:27.954 real 0m0.187s 00:08:27.954 user 0m0.064s 00:08:27.954 sys 0m0.088s 00:08:27.954 11:51:25 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.954 11:51:25 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:27.954 11:51:25 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:27.954 11:51:25 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:27.954 11:51:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.954 11:51:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.954 ************************************ 00:08:27.954 START TEST nvme_multi_secondary 00:08:27.954 ************************************ 00:08:27.954 11:51:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:08:27.954 11:51:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63714 00:08:27.954 11:51:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63715 00:08:27.954 11:51:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:27.954 11:51:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:27.954 11:51:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:31.234 Initializing NVMe Controllers 00:08:31.234 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.234 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.234 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.234 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.234 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:31.234 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:31.234 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:31.234 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:31.234 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:31.234 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:31.234 Initialization complete. Launching workers. 
00:08:31.234 ======================================================== 00:08:31.234 Latency(us) 00:08:31.234 Device Information : IOPS MiB/s Average min max 00:08:31.234 PCIE (0000:00:11.0) NSID 1 from core 2: 3326.69 12.99 4809.21 788.41 14040.53 00:08:31.234 PCIE (0000:00:13.0) NSID 1 from core 2: 3326.69 12.99 4809.31 806.61 12955.76 00:08:31.234 PCIE (0000:00:10.0) NSID 1 from core 2: 3326.69 12.99 4808.02 791.66 12898.14 00:08:31.234 PCIE (0000:00:12.0) NSID 1 from core 2: 3326.69 12.99 4809.28 775.22 12492.30 00:08:31.234 PCIE (0000:00:12.0) NSID 2 from core 2: 3326.69 12.99 4809.22 779.11 13496.75 00:08:31.234 PCIE (0000:00:12.0) NSID 3 from core 2: 3326.69 12.99 4808.84 785.69 13990.45 00:08:31.234 ======================================================== 00:08:31.234 Total : 19960.15 77.97 4808.98 775.22 14040.53 00:08:31.234 00:08:31.234 11:51:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63714 00:08:31.493 Initializing NVMe Controllers 00:08:31.493 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.493 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.493 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.493 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.493 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:31.493 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:31.493 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:31.493 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:31.493 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:31.493 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:31.493 Initialization complete. Launching workers. 00:08:31.493 ======================================================== 00:08:31.493 Latency(us) 00:08:31.493 Device Information : IOPS MiB/s Average min max 00:08:31.493 PCIE (0000:00:11.0) NSID 1 from core 1: 7839.25 30.62 2040.59 906.27 6166.46 00:08:31.493 PCIE (0000:00:13.0) NSID 1 from core 1: 7839.25 30.62 2040.61 1001.00 6192.07 00:08:31.493 PCIE (0000:00:10.0) NSID 1 from core 1: 7839.25 30.62 2039.63 990.93 6325.89 00:08:31.493 PCIE (0000:00:12.0) NSID 1 from core 1: 7839.25 30.62 2040.52 1032.58 6505.71 00:08:31.493 PCIE (0000:00:12.0) NSID 2 from core 1: 7839.25 30.62 2040.48 1021.63 6226.80 00:08:31.493 PCIE (0000:00:12.0) NSID 3 from core 1: 7839.25 30.62 2040.53 901.51 6369.26 00:08:31.493 ======================================================== 00:08:31.493 Total : 47035.52 183.73 2040.39 901.51 6505.71 00:08:31.493 00:08:33.393 Initializing NVMe Controllers 00:08:33.393 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.393 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.393 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.393 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.393 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:33.393 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:33.393 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:33.393 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:33.393 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:33.393 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:33.393 Initialization complete. Launching workers. 
00:08:33.393 ======================================================== 00:08:33.393 Latency(us) 00:08:33.393 Device Information : IOPS MiB/s Average min max 00:08:33.393 PCIE (0000:00:11.0) NSID 1 from core 0: 11285.17 44.08 1417.42 703.02 5246.20 00:08:33.393 PCIE (0000:00:13.0) NSID 1 from core 0: 11285.17 44.08 1417.40 695.73 5225.89 00:08:33.393 PCIE (0000:00:10.0) NSID 1 from core 0: 11285.17 44.08 1416.54 682.72 5589.64 00:08:33.393 PCIE (0000:00:12.0) NSID 1 from core 0: 11285.17 44.08 1417.35 701.93 5489.42 00:08:33.393 PCIE (0000:00:12.0) NSID 2 from core 0: 11285.17 44.08 1417.33 601.93 5828.18 00:08:33.394 PCIE (0000:00:12.0) NSID 3 from core 0: 11285.17 44.08 1417.31 570.48 5548.54 00:08:33.394 ======================================================== 00:08:33.394 Total : 67711.01 264.50 1417.22 570.48 5828.18 00:08:33.394 00:08:33.394 11:51:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63715 00:08:33.394 11:51:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63790 00:08:33.394 11:51:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:33.394 11:51:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63791 00:08:33.394 11:51:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:33.394 11:51:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:36.673 Initializing NVMe Controllers 00:08:36.673 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.673 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.673 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.673 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.673 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:36.673 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:36.673 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:36.673 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:36.673 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:36.673 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:36.673 Initialization complete. Launching workers. 
00:08:36.673 ======================================================== 00:08:36.673 Latency(us) 00:08:36.673 Device Information : IOPS MiB/s Average min max 00:08:36.673 PCIE (0000:00:11.0) NSID 1 from core 1: 8186.28 31.98 1954.11 732.12 6541.70 00:08:36.673 PCIE (0000:00:13.0) NSID 1 from core 1: 8186.28 31.98 1953.94 721.04 6653.56 00:08:36.673 PCIE (0000:00:10.0) NSID 1 from core 1: 8186.28 31.98 1953.22 699.42 6525.12 00:08:36.673 PCIE (0000:00:12.0) NSID 1 from core 1: 8186.28 31.98 1954.12 718.65 6521.71 00:08:36.673 PCIE (0000:00:12.0) NSID 2 from core 1: 8186.28 31.98 1954.09 728.50 6515.47 00:08:36.673 PCIE (0000:00:12.0) NSID 3 from core 1: 8186.28 31.98 1954.19 733.09 6512.69 00:08:36.673 ======================================================== 00:08:36.673 Total : 49117.69 191.87 1953.94 699.42 6653.56 00:08:36.673 00:08:36.673 Initializing NVMe Controllers 00:08:36.673 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.673 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.673 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.673 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.673 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:36.673 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:36.673 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:36.673 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:36.673 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:36.673 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:36.673 Initialization complete. Launching workers. 00:08:36.673 ======================================================== 00:08:36.673 Latency(us) 00:08:36.673 Device Information : IOPS MiB/s Average min max 00:08:36.673 PCIE (0000:00:11.0) NSID 1 from core 0: 8061.71 31.49 1984.27 734.08 6020.62 00:08:36.673 PCIE (0000:00:13.0) NSID 1 from core 0: 8061.71 31.49 1984.30 728.18 6155.95 00:08:36.673 PCIE (0000:00:10.0) NSID 1 from core 0: 8061.71 31.49 1983.31 700.94 6085.02 00:08:36.673 PCIE (0000:00:12.0) NSID 1 from core 0: 8061.71 31.49 1984.20 719.67 6018.89 00:08:36.673 PCIE (0000:00:12.0) NSID 2 from core 0: 8061.71 31.49 1984.22 716.19 6085.98 00:08:36.673 PCIE (0000:00:12.0) NSID 3 from core 0: 8061.71 31.49 1984.14 728.98 5556.53 00:08:36.673 ======================================================== 00:08:36.673 Total : 48370.26 188.95 1984.07 700.94 6155.95 00:08:36.673 00:08:39.206 Initializing NVMe Controllers 00:08:39.206 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.206 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.206 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.206 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.206 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:39.206 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:39.206 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:39.207 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:39.207 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:39.207 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:39.207 Initialization complete. Launching workers. 
00:08:39.207 ======================================================== 00:08:39.207 Latency(us) 00:08:39.207 Device Information : IOPS MiB/s Average min max 00:08:39.207 PCIE (0000:00:11.0) NSID 1 from core 2: 4786.24 18.70 3342.39 760.91 12113.43 00:08:39.207 PCIE (0000:00:13.0) NSID 1 from core 2: 4786.24 18.70 3342.34 741.71 12925.06 00:08:39.207 PCIE (0000:00:10.0) NSID 1 from core 2: 4786.24 18.70 3341.13 729.23 12820.66 00:08:39.207 PCIE (0000:00:12.0) NSID 1 from core 2: 4786.24 18.70 3342.54 713.43 12209.33 00:08:39.207 PCIE (0000:00:12.0) NSID 2 from core 2: 4786.24 18.70 3342.48 750.90 12689.47 00:08:39.207 PCIE (0000:00:12.0) NSID 3 from core 2: 4786.24 18.70 3342.28 746.16 12243.46 00:08:39.207 ======================================================== 00:08:39.207 Total : 28717.43 112.18 3342.19 713.43 12925.06 00:08:39.207 00:08:39.207 ************************************ 00:08:39.207 END TEST nvme_multi_secondary 00:08:39.207 ************************************ 00:08:39.207 11:51:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63790 00:08:39.207 11:51:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63791 00:08:39.207 00:08:39.207 real 0m10.766s 00:08:39.207 user 0m18.395s 00:08:39.207 sys 0m0.604s 00:08:39.207 11:51:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.207 11:51:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:39.207 11:51:36 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:39.207 11:51:36 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62747 ]] 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1092 -- # kill 62747 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1093 -- # wait 62747 00:08:39.207 [2024-11-18 11:51:36.379003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.379073] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.379101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.379118] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.381274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.381327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.381345] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.381363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.383438] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 
00:08:39.207 [2024-11-18 11:51:36.383486] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.383501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.383517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.385656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.385705] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.385722] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 [2024-11-18 11:51:36.385740] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63663) is not found. Dropping the request. 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:08:39.207 11:51:36 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.207 11:51:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.207 ************************************ 00:08:39.207 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:39.207 ************************************ 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:39.207 * Looking for test storage... 
00:08:39.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.207 --rc genhtml_branch_coverage=1 00:08:39.207 --rc genhtml_function_coverage=1 00:08:39.207 --rc genhtml_legend=1 00:08:39.207 --rc geninfo_all_blocks=1 00:08:39.207 --rc geninfo_unexecuted_blocks=1 00:08:39.207 00:08:39.207 ' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.207 --rc genhtml_branch_coverage=1 00:08:39.207 --rc genhtml_function_coverage=1 00:08:39.207 --rc genhtml_legend=1 00:08:39.207 --rc geninfo_all_blocks=1 00:08:39.207 --rc geninfo_unexecuted_blocks=1 00:08:39.207 00:08:39.207 ' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.207 --rc genhtml_branch_coverage=1 00:08:39.207 --rc genhtml_function_coverage=1 00:08:39.207 --rc genhtml_legend=1 00:08:39.207 --rc geninfo_all_blocks=1 00:08:39.207 --rc geninfo_unexecuted_blocks=1 00:08:39.207 00:08:39.207 ' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.207 --rc genhtml_branch_coverage=1 00:08:39.207 --rc genhtml_function_coverage=1 00:08:39.207 --rc genhtml_legend=1 00:08:39.207 --rc geninfo_all_blocks=1 00:08:39.207 --rc geninfo_unexecuted_blocks=1 00:08:39.207 00:08:39.207 ' 00:08:39.207 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:39.208 
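The lcov version gate above walks scripts/common.sh's cmp_versions: each version string is split into fields (IFS=.-:) and compared numerically field by field, with missing fields counting as zero, so "1.15" sorts below "2". A standalone sketch of the same idea, simplified to dot-separated fields (not the exact SPDK helper):

  lt() {                              # usage: lt 1.15 2 -> true if $1 < $2
      local IFS=.
      local -a v1=($1) v2=($2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                        # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_opts=1

The result decides whether to pass the old-style --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options, as seen in the LCOV_OPTS exports above.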
11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:39.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63951 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63951 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 63951 ']' 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
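get_first_nvme_bdf, traced above, reduces to: ask scripts/gen_nvme.sh for a bdev config covering every NVMe controller in the system, extract the PCI addresses with jq, and take the first entry. A sketch using the paths from this run:

  rootdir=/home/vagrant/spdk_repo/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}                      # 0000:00:10.0 here, out of the four QEMU devices

With a BDF in hand, the test launches spdk_tgt on all four cores (-m 0xF) and waits on its RPC socket, which is the waitforlisten trace continuing below.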
00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.208 11:51:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.208 [2024-11-18 11:51:36.815196] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:08:39.208 [2024-11-18 11:51:36.815325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63951 ] 00:08:39.467 [2024-11-18 11:51:36.982922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.467 [2024-11-18 11:51:37.071637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.467 [2024-11-18 11:51:37.071687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.467 [2024-11-18 11:51:37.071738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.467 [2024-11-18 11:51:37.071796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.032 nvme0n1 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_mtFek.txt 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.032 true 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731930697 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63970 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:40.032 11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:40.032 
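The RPC sequence traced above is the core of the stuck-admin-command setup: attach the controller through the bdev layer, arm a one-shot error injection that holds the next admin opcode 0x0a (Get Features) rather than submitting it, then send a Get Features command in the background so that it gets stuck on purpose. Condensed below; every RPC name and flag appears verbatim in the trace, only the shape is an approximation:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Hold the next Get Features (--opc 10) for up to 15 s and complete it with
  # SCT=0 / SC=1 (Invalid Opcode); --do_not_submit is what makes it "stuck".
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # -c carries the raw admin SQE, base64-encoded (full blob in the trace above);
  # this call blocks until the injected command is completed.
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$base64_sqe" &

The bdev_nvme_reset_controller issued after the sleep 2 below is what releases the held command, which is why its completion comes back as INVALID OPCODE (00/01).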
11:51:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.558 [2024-11-18 11:51:39.715624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:42.558 [2024-11-18 11:51:39.715854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:42.558 [2024-11-18 11:51:39.715874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:42.558 [2024-11-18 11:51:39.715885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.558 [2024-11-18 11:51:39.717551] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:42.558 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63970 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63970 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63970 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_mtFek.txt 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:42.558 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_mtFek.txt 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63951 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 63951 ']' 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 63951 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63951 00:08:42.559 killing process with pid 63951 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63951' 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 63951 00:08:42.559 11:51:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 63951 00:08:43.495 11:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:43.495 11:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:43.495 00:08:43.495 real 0m4.474s 00:08:43.495 user 0m15.812s 00:08:43.495 sys 0m0.488s 00:08:43.495 ************************************ 00:08:43.495 END TEST bdev_nvme_reset_stuck_adm_cmd 
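base64_decode_bits, traced twice above, recovers the status fields from the base64-encoded completion that bdev_nvme_send_cmd wrote to /tmp/err_inj_mtFek.txt: decode to bytes, rebuild the 16-bit status word, then shift and mask (shift 1, mask 255 for SC; shift 9, mask 3 for SCT). A hedged sketch; the byte offsets assume the 16-byte CPL layout seen in this trace:

  decode_bits() {                     # args: <base64 cpl> <shift> <mask>
      local -a b
      mapfile -t b < <(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')
      local status=$(( b[14] | (b[15] << 8) ))   # DW3 status word, phase bit at bit 0
      printf '0x%x\n' $(( (status >> $2) & $3 ))
  }
  decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255    # SC  -> 0x1 (Invalid Opcode)
  decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3      # SCT -> 0x0 (generic command status)

The test then asserts both values match the injected --sct 0 --sc 1 and that the reset finished inside the 5-second budget (a diff_time of 2 s here).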
00:08:43.495 ************************************ 00:08:43.495 11:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.495 11:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.495 11:51:41 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:43.495 11:51:41 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:43.495 11:51:41 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:43.495 11:51:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.495 11:51:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.495 ************************************ 00:08:43.495 START TEST nvme_fio 00:08:43.495 ************************************ 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:43.495 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.495 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.752 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.752 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:44.009 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:44.009 11:51:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:44.010 11:51:41 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:44.010 11:51:41 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:44.267 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:44.267 fio-3.35 00:08:44.267 Starting 1 thread 00:08:50.832 00:08:50.832 test: (groupid=0, jobs=1): err= 0: pid=64104: Mon Nov 18 11:51:47 2024 00:08:50.832 read: IOPS=25.1k, BW=98.1MiB/s (103MB/s)(196MiB/2001msec) 00:08:50.832 slat (usec): min=3, max=417, avg= 4.86, stdev= 2.93 00:08:50.832 clat (usec): min=307, max=7994, avg=2543.57, stdev=661.67 00:08:50.832 lat (usec): min=311, max=8014, avg=2548.43, stdev=662.84 00:08:50.832 clat percentiles (usec): 00:08:50.832 | 1.00th=[ 1549], 5.00th=[ 2040], 10.00th=[ 2212], 20.00th=[ 2311], 00:08:50.832 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:08:50.832 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2933], 95.00th=[ 3949], 00:08:50.832 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6325], 99.95th=[ 6390], 00:08:50.832 | 99.99th=[ 7701] 00:08:50.832 bw ( KiB/s): min=97784, max=102384, per=99.81%, avg=100272.00, stdev=2322.94, samples=3 00:08:50.832 iops : min=24446, max=25596, avg=25068.00, stdev=580.73, samples=3 00:08:50.832 write: IOPS=25.0k, BW=97.5MiB/s (102MB/s)(195MiB/2001msec); 0 zone resets 00:08:50.832 slat (nsec): min=3428, max=67437, avg=5089.24, stdev=1952.33 00:08:50.832 clat (usec): min=228, max=7945, avg=2549.38, stdev=677.22 00:08:50.832 lat (usec): min=232, max=7951, avg=2554.47, stdev=678.38 00:08:50.832 clat percentiles (usec): 00:08:50.832 | 1.00th=[ 1532], 5.00th=[ 2040], 10.00th=[ 2212], 20.00th=[ 2311], 00:08:50.832 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2409], 00:08:50.832 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2966], 95.00th=[ 4015], 00:08:50.832 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6325], 99.95th=[ 6652], 00:08:50.832 | 99.99th=[ 7701] 00:08:50.832 bw ( KiB/s): min=97448, max=102560, per=100.00%, avg=100301.33, stdev=2607.37, samples=3 00:08:50.832 iops : min=24362, max=25640, avg=25075.33, stdev=651.84, samples=3 00:08:50.832 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.09% 00:08:50.832 lat (msec) : 2=4.29%, 4=90.65%, 10=4.95% 00:08:50.832 cpu : usr=99.00%, sys=0.15%, ctx=14, majf=0, minf=607 
00:08:50.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:50.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.832 issued rwts: total=50259,49970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.832 00:08:50.832 Run status group 0 (all jobs): 00:08:50.832 READ: bw=98.1MiB/s (103MB/s), 98.1MiB/s-98.1MiB/s (103MB/s-103MB/s), io=196MiB (206MB), run=2001-2001msec 00:08:50.832 WRITE: bw=97.5MiB/s (102MB/s), 97.5MiB/s-97.5MiB/s (102MB/s-102MB/s), io=195MiB (205MB), run=2001-2001msec 00:08:50.832 ----------------------------------------------------- 00:08:50.832 Suppressions used: 00:08:50.832 count bytes template 00:08:50.832 1 32 /usr/src/fio/parse.c 00:08:50.832 1 8 libtcmalloc_minimal.so 00:08:50.832 ----------------------------------------------------- 00:08:50.832 00:08:50.832 11:51:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:50.832 11:51:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.832 11:51:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:50.832 11:51:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.832 11:51:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:50.833 11:51:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:50.833 11:51:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:50.833 11:51:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1354 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:50.833 11:51:48 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:51.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.094 fio-3.35 00:08:51.094 Starting 1 thread 00:08:57.699 00:08:57.699 test: (groupid=0, jobs=1): err= 0: pid=64165: Mon Nov 18 11:51:54 2024 00:08:57.699 read: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(167MiB/2001msec) 00:08:57.699 slat (nsec): min=3383, max=66652, avg=5055.65, stdev=2299.89 00:08:57.699 clat (usec): min=334, max=10082, avg=2984.74, stdev=1052.86 00:08:57.699 lat (usec): min=338, max=10149, avg=2989.80, stdev=1053.98 00:08:57.699 clat percentiles (usec): 00:08:57.699 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2343], 00:08:57.699 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2737], 00:08:57.699 | 70.00th=[ 2933], 80.00th=[ 3392], 90.00th=[ 4686], 95.00th=[ 5342], 00:08:57.699 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 8848], 00:08:57.699 | 99.99th=[ 9241] 00:08:57.699 bw ( KiB/s): min=76600, max=92048, per=100.00%, avg=86212.00, stdev=8387.72, samples=3 00:08:57.699 iops : min=19150, max=23012, avg=21553.00, stdev=2096.93, samples=3 00:08:57.699 write: IOPS=21.2k, BW=82.8MiB/s (86.9MB/s)(166MiB/2001msec); 0 zone resets 00:08:57.699 slat (nsec): min=3471, max=73639, avg=5182.30, stdev=2443.07 00:08:57.699 clat (usec): min=342, max=9437, avg=3002.34, stdev=1041.88 00:08:57.699 lat (usec): min=346, max=9449, avg=3007.52, stdev=1042.98 00:08:57.699 clat percentiles (usec): 00:08:57.699 | 1.00th=[ 1631], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343], 00:08:57.699 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2769], 00:08:57.699 | 70.00th=[ 2966], 80.00th=[ 3425], 90.00th=[ 4686], 95.00th=[ 5342], 00:08:57.699 | 99.00th=[ 6456], 99.50th=[ 6980], 99.90th=[ 8455], 99.95th=[ 8848], 00:08:57.699 | 99.99th=[ 9241] 00:08:57.699 bw ( KiB/s): min=78024, max=91536, per=100.00%, avg=86319.00, stdev=7262.86, samples=3 00:08:57.699 iops : min=19506, max=22884, avg=21579.67, stdev=1815.66, samples=3 00:08:57.699 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.06% 00:08:57.699 lat (msec) : 2=2.77%, 4=82.14%, 10=15.01%, 20=0.01% 00:08:57.699 cpu : usr=99.10%, sys=0.10%, ctx=6, majf=0, minf=607 00:08:57.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.699 issued rwts: total=42754,42434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.699 00:08:57.699 Run status group 0 (all jobs): 00:08:57.699 READ: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:08:57.699 WRITE: bw=82.8MiB/s (86.9MB/s), 82.8MiB/s-82.8MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:08:57.699 ----------------------------------------------------- 00:08:57.699 Suppressions used: 00:08:57.699 count bytes template 00:08:57.699 1 32 /usr/src/fio/parse.c 00:08:57.699 1 8 libtcmalloc_minimal.so 00:08:57.699 ----------------------------------------------------- 00:08:57.699 00:08:57.699 11:51:54 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.699 11:51:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.699 11:51:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.699 11:51:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:57.699 11:51:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:57.699 11:51:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.699 11:51:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.699 11:51:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.699 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:57.700 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.700 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.700 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:57.700 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.700 11:51:55 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:57.700 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.700 fio-3.35 00:08:57.700 Starting 1 thread 00:09:04.278 00:09:04.278 test: (groupid=0, jobs=1): err= 0: pid=64227: Mon Nov 18 11:52:01 2024 00:09:04.278 read: IOPS=20.0k, BW=78.1MiB/s (81.9MB/s)(156MiB/2002msec) 00:09:04.278 slat (nsec): min=3345, max=69976, avg=5336.94, stdev=2500.34 00:09:04.278 clat (usec): min=593, max=8600, avg=3184.67, stdev=1090.73 00:09:04.278 lat (usec): min=597, max=8605, avg=3190.01, stdev=1091.81 00:09:04.278 clat percentiles (usec): 00:09:04.278 | 1.00th=[ 1663], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:04.278 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 
2802], 60.00th=[ 3032], 00:09:04.278 | 70.00th=[ 3261], 80.00th=[ 3818], 90.00th=[ 4883], 95.00th=[ 5604], 00:09:04.278 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 7898], 00:09:04.278 | 99.99th=[ 8356] 00:09:04.278 bw ( KiB/s): min=75536, max=83232, per=99.95%, avg=79906.00, stdev=3199.91, samples=4 00:09:04.278 iops : min=18884, max=20808, avg=19976.50, stdev=799.98, samples=4 00:09:04.278 write: IOPS=19.9k, BW=77.9MiB/s (81.7MB/s)(156MiB/2002msec); 0 zone resets 00:09:04.278 slat (nsec): min=3407, max=72726, avg=5437.73, stdev=2561.03 00:09:04.278 clat (usec): min=625, max=8725, avg=3203.60, stdev=1087.37 00:09:04.278 lat (usec): min=628, max=8740, avg=3209.04, stdev=1088.46 00:09:04.278 clat percentiles (usec): 00:09:04.278 | 1.00th=[ 1647], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:09:04.278 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3064], 00:09:04.278 | 70.00th=[ 3294], 80.00th=[ 3785], 90.00th=[ 4883], 95.00th=[ 5604], 00:09:04.278 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7767], 99.95th=[ 8029], 00:09:04.278 | 99.99th=[ 8586] 00:09:04.278 bw ( KiB/s): min=74480, max=83392, per=99.93%, avg=79728.00, stdev=3752.30, samples=4 00:09:04.278 iops : min=18620, max=20848, avg=19932.00, stdev=938.08, samples=4 00:09:04.278 lat (usec) : 750=0.01%, 1000=0.04% 00:09:04.278 lat (msec) : 2=2.63%, 4=79.26%, 10=18.06% 00:09:04.278 cpu : usr=99.00%, sys=0.15%, ctx=6, majf=0, minf=607 00:09:04.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.278 issued rwts: total=40013,39932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.278 00:09:04.278 Run status group 0 (all jobs): 00:09:04.278 READ: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=156MiB (164MB), run=2002-2002msec 00:09:04.278 WRITE: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=156MiB (164MB), run=2002-2002msec 00:09:04.278 ----------------------------------------------------- 00:09:04.278 Suppressions used: 00:09:04.278 count bytes template 00:09:04.278 1 32 /usr/src/fio/parse.c 00:09:04.278 1 8 libtcmalloc_minimal.so 00:09:04.278 ----------------------------------------------------- 00:09:04.278 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.278 11:52:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.278 11:52:01 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:04.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.278 fio-3.35 00:09:04.278 Starting 1 thread 00:09:12.413 00:09:12.413 test: (groupid=0, jobs=1): err= 0: pid=64288: Mon Nov 18 11:52:10 2024 00:09:12.413 read: IOPS=19.6k, BW=76.7MiB/s (80.5MB/s)(154MiB/2001msec) 00:09:12.413 slat (nsec): min=3355, max=74123, avg=5380.22, stdev=2808.68 00:09:12.413 clat (usec): min=452, max=13544, avg=3234.28, stdev=1269.98 00:09:12.413 lat (usec): min=462, max=13588, avg=3239.66, stdev=1271.31 00:09:12.413 clat percentiles (usec): 00:09:12.413 | 1.00th=[ 1450], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:12.413 | 30.00th=[ 2442], 40.00th=[ 2606], 50.00th=[ 2769], 60.00th=[ 2999], 00:09:12.413 | 70.00th=[ 3359], 80.00th=[ 4228], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:12.413 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[10159], 99.95th=[12125], 00:09:12.413 | 99.99th=[13435] 00:09:12.413 bw ( KiB/s): min=72368, max=89536, per=100.00%, avg=79408.00, stdev=8990.93, samples=3 00:09:12.413 iops : min=18092, max=22384, avg=19852.00, stdev=2247.73, samples=3 00:09:12.413 write: IOPS=19.6k, BW=76.6MiB/s (80.3MB/s)(153MiB/2001msec); 0 zone resets 00:09:12.413 slat (usec): min=3, max=171, avg= 5.50, stdev= 2.94 00:09:12.413 clat (usec): min=471, max=13479, avg=3262.35, stdev=1276.27 00:09:12.413 lat (usec): min=482, max=13492, avg=3267.85, stdev=1277.51 00:09:12.413 clat percentiles (usec): 00:09:12.413 | 1.00th=[ 1467], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2376], 00:09:12.413 | 30.00th=[ 2474], 40.00th=[ 2638], 50.00th=[ 2802], 60.00th=[ 3032], 00:09:12.413 | 70.00th=[ 3392], 80.00th=[ 4228], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:12.413 | 99.00th=[ 7308], 99.50th=[ 7832], 99.90th=[10159], 
99.95th=[12649], 00:09:12.413 | 99.99th=[13435] 00:09:12.413 bw ( KiB/s): min=72368, max=89656, per=100.00%, avg=79442.67, stdev=9061.30, samples=3 00:09:12.413 iops : min=18092, max=22414, avg=19860.67, stdev=2265.32, samples=3 00:09:12.413 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.06% 00:09:12.413 lat (msec) : 2=3.74%, 4=73.73%, 10=22.34%, 20=0.12% 00:09:12.413 cpu : usr=98.75%, sys=0.25%, ctx=3, majf=0, minf=605 00:09:12.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.413 issued rwts: total=39307,39246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.413 00:09:12.413 Run status group 0 (all jobs): 00:09:12.413 READ: bw=76.7MiB/s (80.5MB/s), 76.7MiB/s-76.7MiB/s (80.5MB/s-80.5MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:12.413 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=153MiB (161MB), run=2001-2001msec 00:09:12.673 ----------------------------------------------------- 00:09:12.673 Suppressions used: 00:09:12.673 count bytes template 00:09:12.673 1 32 /usr/src/fio/parse.c 00:09:12.673 1 8 libtcmalloc_minimal.so 00:09:12.673 ----------------------------------------------------- 00:09:12.673 00:09:12.673 11:52:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.673 11:52:10 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:12.673 00:09:12.673 real 0m29.213s 00:09:12.673 user 0m16.421s 00:09:12.673 sys 0m23.527s 00:09:12.673 11:52:10 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.673 ************************************ 00:09:12.673 END TEST nvme_fio 00:09:12.673 ************************************ 00:09:12.673 11:52:10 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:12.673 00:09:12.673 real 1m38.244s 00:09:12.673 user 3m36.280s 00:09:12.673 sys 0m33.849s 00:09:12.673 11:52:10 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.673 ************************************ 00:09:12.674 END TEST nvme 00:09:12.674 ************************************ 00:09:12.674 11:52:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:12.674 11:52:10 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:12.674 11:52:10 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:12.674 11:52:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:12.674 11:52:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:12.674 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.674 ************************************ 00:09:12.674 START TEST nvme_scc 00:09:12.674 ************************************ 00:09:12.674 11:52:10 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:12.935 * Looking for test storage... 
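Recapping the four nvme_fio runs above (one per controller: 10.0, 11.0, 12.0, 13.0) before the nvme_scc output continues: two details from the repeated fio_plugin traces are worth noting. The ASan runtime found via ldd | grep libasan has to be preloaded ahead of the SPDK ioengine plugin, and the PCIe address in --filename is written with dots (0000.00.10.0) rather than colons, since fio treats ':' as a filename separator. A condensed sketch, not the literal fio_nvme helper:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096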
00:09:12.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.935 11:52:10 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:12.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.935 --rc genhtml_branch_coverage=1 00:09:12.935 --rc genhtml_function_coverage=1 00:09:12.935 --rc genhtml_legend=1 00:09:12.935 --rc geninfo_all_blocks=1 00:09:12.935 --rc geninfo_unexecuted_blocks=1 00:09:12.935 00:09:12.935 ' 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:12.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.935 --rc genhtml_branch_coverage=1 00:09:12.935 --rc genhtml_function_coverage=1 00:09:12.935 --rc genhtml_legend=1 00:09:12.935 --rc geninfo_all_blocks=1 00:09:12.935 --rc geninfo_unexecuted_blocks=1 00:09:12.935 00:09:12.935 ' 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:12.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.935 --rc genhtml_branch_coverage=1 00:09:12.935 --rc genhtml_function_coverage=1 00:09:12.935 --rc genhtml_legend=1 00:09:12.935 --rc geninfo_all_blocks=1 00:09:12.935 --rc geninfo_unexecuted_blocks=1 00:09:12.935 00:09:12.935 ' 00:09:12.935 11:52:10 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:12.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.935 --rc genhtml_branch_coverage=1 00:09:12.935 --rc genhtml_function_coverage=1 00:09:12.935 --rc genhtml_legend=1 00:09:12.935 --rc geninfo_all_blocks=1 00:09:12.935 --rc geninfo_unexecuted_blocks=1 00:09:12.935 00:09:12.935 ' 00:09:12.935 11:52:10 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:12.935 11:52:10 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:12.935 11:52:10 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:12.935 11:52:10 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:12.935 11:52:10 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.936 11:52:10 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.936 11:52:10 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.936 11:52:10 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.936 11:52:10 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.936 11:52:10 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.936 11:52:10 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.936 11:52:10 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.936 11:52:10 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:12.936 11:52:10 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
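The nvme_scc trace below switches from RPC-driven testing to sysfs scanning: functions.sh declares associative arrays for controllers, namespaces, and BDFs, setup.sh rebinds the devices to the kernel nvme driver, and scan_nvme_ctrls snapshots `nvme id-ctrl` output into those arrays one register at a time, which is what produces the long run of nvme0[...] assignments that follows. The per-register loop is roughly this; the whitespace trimming is an approximation of the real nvme_get helper:

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # vid, ssvid, sn, mn, fr, mdts, ...
      [[ -n "$reg" ]] && nvme0[$reg]=${val# }
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[vid]}"                # 0x1b36 for these QEMU controllers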
00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:12.936 11:52:10 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:12.936 11:52:10 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.936 11:52:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:12.936 11:52:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:12.936 11:52:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:12.936 11:52:10 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:13.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.458 Waiting for block devices as requested 00:09:13.458 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.458 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.719 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.719 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.015 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:19.015 11:52:16 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:19.015 11:52:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.015 11:52:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:19.015 11:52:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.015 11:52:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.015 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
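Among the registers captured so far, mdts=7 is an exponent rather than a byte count: the NVMe spec defines the maximum data transfer size as 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN). Assuming the common 4 KiB minimum page, which this trace cannot confirm (CAP is a controller register, not an id-ctrl field), the per-command limit works out as:
mdts=7                                     # from the id-ctrl dump above
mpsmin_bytes=4096                          # assumed: CAP.MPSMIN=0 -> 4 KiB pages
echo "$(( (1 << mdts) * mpsmin_bytes ))"   # 524288 bytes, i.e. 512 KiB per I/O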
00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:19.016 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:19.017 11:52:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.017 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.018 11:52:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.018 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.019 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
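The id-ns fields gathered for nvme0n1 are enough to work out the namespace size by hand: nsze=0x140000 is the LBA count, and the low nibble of flbas=0x4 selects LBA format 4, whose descriptor a few lines below reads "lbads:12 (in use)", i.e. 2^12 = 4096-byte blocks. A quick check in shell arithmetic:
nsze=$(( 0x140000 ))               # 1310720 LBAs, from the id-ns dump above
lbads=12                           # from lbaf4, the format flbas points at
echo "$(( nsze * (1 << lbads) ))"  # 5368709120 bytes = exactly 5 GiB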
00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.020 11:52:16 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:19.020 11:52:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.020 11:52:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:19.020 11:52:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.020 11:52:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:19.020 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:19.021 11:52:16 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.021 
11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
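[annotation] The block above is bash xtrace output from nvme/functions.sh@16-23: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl, splits each "field : value" line on the colon (IFS=:, read -r reg val), and stores the pair in a global associative array through eval — which is why every field appears twice in the trace, once as the eval string and once as the resulting assignment. A minimal sketch of that parsing pattern; parse_id_output and fake_id_ctrl are illustrative stand-ins, not SPDK's actual helpers:

#!/usr/bin/env bash
# Sketch of the parsing loop traced above (nvme/functions.sh@16-23).
# parse_id_output and fake_id_ctrl are illustrative names, not SPDK's.
parse_id_output() {
    local ref=$1 reg val
    shift
    declare -gA "$ref=()"            # mirrors `local -gA 'nvme1=()'` at @20
    while IFS=: read -r reg val; do  # split "field : value" on the colon (@21)
        reg=${reg//[[:space:]]/}     # drop padding around the field name
        val=${val# }                 # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue   # the `[[ -n ... ]]` guard (@22)
        eval "${ref}[${reg}]=\"\$val\""        # nvme1[mdts]="7", etc. (@23)
    done < <("$@")
}

# Stand-in for `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1`:
fake_id_ctrl() {
    printf '%s\n' 'vid       : 0x1b36' 'mdts      : 7' 'ver       : 0x10400'
}

parse_id_output nvme1 fake_id_ctrl
echo "mdts=${nvme1[mdts]} ver=${nvme1[ver]}"   # -> mdts=7 ver=0x10400

The eval indirection is what lets one helper fill nvme1, nvme1n1, nvme2, ... by name, with the target array passed as the first argument.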
00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.021 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:19.022 11:52:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:19.022 11:52:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.022 11:52:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:19.023 11:52:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.023 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:19.024 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.025 
11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:19.025 11:52:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.025 11:52:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:19.025 11:52:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.025 11:52:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.025 11:52:16 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.025 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:19.026 11:52:16 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:19.026 11:52:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:19.026 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
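[annotation] For the namespace dumped earlier in this run (nvme1n1), the captured fields are enough to work out the active block size and capacity by hand: flbas=0x7 selects LBA format 7, whose descriptor was recorded as "ms:64 lbads:12 rp:0 (in use)", and nsze=0x17a17a is the namespace size in blocks. A small sketch of that decode, reusing the values exactly as the trace stored them; the decode is the standard NVMe FLBAS/LBADS interpretation, not an SPDK helper:

#!/usr/bin/env bash
# Values copied from the nvme1n1 id-ns trace above; only the three fields
# needed for the decode are reproduced here.
declare -A nvme1n1=(
    [nsze]=0x17a17a
    [flbas]=0x7
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
)

fmt=$(( ${nvme1n1[flbas]} & 0xf ))         # FLBAS bits 3:0 -> format index 7
lbaf=${nvme1n1[lbaf$fmt]}
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # -> 12
bs=$(( 1 << lbads ))                       # 2^12 = 4096-byte logical blocks
echo "format=$fmt bs=$bs capacity=$(( ${nvme1n1[nsze]} * bs )) bytes"
# -> format=7 bs=4096 capacity=6343335936 bytes (~5.9 GiB)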
00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:19.027 11:52:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
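Two of the controller fields just captured decode neatly: per the NVMe spec, sqes and cqes pack the required and maximum queue-entry sizes into the low and high nibbles as powers of two, so the 0x66/0x44 above mean fixed 64-byte submission-queue and 16-byte completion-queue entries (and nn=256 caps this controller at 256 namespaces). The helper below is illustrative only, not part of the test harness:

    # Decode an NVMe SQES/CQES byte: low nibble = required entry size,
    # high nibble = maximum, both as log2 of the size in bytes.
    decode_qes() {
      local v=$(( $1 ))   # accepts 0x66 / 0x44 style input
      printf 'min=%d max=%d bytes\n' $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))
    }
    decode_qes 0x66   # -> min=64 max=64 bytes (SQ entries)
    decode_qes 0x44   # -> min=16 max=16 bytes (CQ entries)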
00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.027 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:19.028 
11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.028 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.029 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.030 11:52:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:19.030 11:52:16 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:19.030 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:19.031 11:52:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.031 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
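As with nvme2n1 earlier (and nvme2n3 a little further down), this id-ns dump is produced by the loop visible at functions.sh@54-58: the script globs the controller's namespace nodes under sysfs, runs the same nvme_get parse on each /dev node, and records the device name under its namespace ID. A sketch of that walk, reusing the hypothetical nvme_get from the earlier note and assuming controller nvme2:

    # Enumerate nvme2's namespaces the way functions.sh@54-58 does (sketch only).
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns
    declare -n _ctrl_ns=nvme2_ns            # nameref, as at functions.sh@53
    for ns in "$ctrl/${ctrl##*/}n"*; do     # globs nvme2n1, nvme2n2, ...
      [[ -e $ns ]] || continue
      ns_dev=${ns##*/}                      # e.g. nvme2n2
      nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev           # index by NSID: _ctrl_ns[2]=nvme2n2
    done

The real harness invokes its pinned /usr/local/src/nvme-cli/nvme binary (functions.sh@16) rather than whatever nvme is on PATH.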
00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 
11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 
11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:19.032 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:19.033 11:52:16 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.033 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.034 
11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:19.034 11:52:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:19.034 11:52:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:19.034 11:52:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.034 11:52:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:19.034 11:52:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
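
Every eval entry in this dump comes from one small loop: nvme_get runs nvme-cli's id-ctrl (or id-ns) subcommand, whose output is "field : value" text, splits each line at the first colon with `IFS=: read -r reg val`, and stores the pair in an associative array. A condensed sketch of that pattern, with illustrative variable names rather than the script's exact internals:

    declare -A regs
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # field names arrive padded
        [[ -n $reg && -n $val ]] || continue  # skip banner and blank lines
        regs[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "oncs=${regs[oncs]}"

Values keep their original padding, which is why the trace stores sn as '12343 ' with trailing spaces.
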
00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:19.034 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
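
The evals are needed because the target array name ("nvme3" here) arrives as a parameter: nvme_get declares the array globally with `local -gA` and then builds each assignment as text. A nameref reaches the same result; both are sketched below with a hypothetical array name:

    make_ctrl_array() {
        local ref=$1
        local -gA "$ref=()"          # declare $ref as a global assoc array
        eval "${ref}[vid]=0x1b36"    # the eval-style assignment in the trace
        local -n arr=$ref            # nameref alternative (bash 4.3+)
        arr[ssvid]=0x1af4
    }
    make_ctrl_array nvme_demo
    echo "${nvme_demo[vid]} ${nvme_demo[ssvid]}"   # -> 0x1b36 0x1af4
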
00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 
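
The wctemp/cctemp values captured just above are in kelvins, as Identify Controller defines them, so 343 and 373 are the usual 70 °C warning and 100 °C critical thresholds:

    kelvin_to_celsius() { echo $(( $1 - 273 )); }
    kelvin_to_celsius 343   # warning threshold  -> 70
    kelvin_to_celsius 373   # critical threshold -> 100
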
11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.035 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.036 11:52:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.036 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
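
The sqes/cqes bytes parsed a few entries up each pack two sizes: bits 3:0 give the required queue-entry size and bits 7:4 the maximum, both as log2 of bytes. Decoding the values seen here:

    decode_qes() { printf 'min=%dB max=%dB\n' $(( 1 << ($1 & 0xf) )) $(( 1 << ($1 >> 4) )); }
    decode_qes 0x66   # SQES -> min=64B max=64B (the standard 64-byte SQE)
    decode_qes 0x44   # CQES -> min=16B max=16B (the standard 16-byte CQE)
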
00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.037 11:52:16 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:19.037 11:52:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:19.037 
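
The controller selection traced through here (ctrl_has_scc over each entry of ctrls) reduces to a single bit test: ONCS bit 8 advertises the NVMe Copy command, which is exactly what the SCC test needs. All four QEMU controllers report oncs=0x15d, which has that bit set:

    oncs=0x15d
    if (( oncs & (1 << 8) )); then   # 0x15d & 0x100 is non-zero
        echo "controller supports Copy (SCC)"
    fi
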
11:52:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:19.037 11:52:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:19.038 11:52:16 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:19.038 11:52:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:19.038 11:52:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:19.038 11:52:16 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:19.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.181 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.181 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.181 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.181 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:20.181 11:52:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:20.181 11:52:17 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:20.181 11:52:17 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.181 11:52:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 ************************************ 00:09:20.181 START TEST nvme_simple_copy 00:09:20.181 ************************************ 00:09:20.181 11:52:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:20.443 Initializing NVMe Controllers 00:09:20.443 Attaching to 0000:00:10.0 00:09:20.443 Controller supports SCC. Attached to 0000:00:10.0 00:09:20.443 Namespace ID: 1 size: 6GB 00:09:20.443 Initialization complete. 00:09:20.443 00:09:20.443 Controller QEMU NVMe Ctrl (12340 ) 00:09:20.443 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:20.443 Namespace Block Size:4096 00:09:20.443 Writing LBAs 0 to 63 with Random Data 00:09:20.443 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:20.443 LBAs matching Written Data: 64 00:09:20.443 00:09:20.443 real 0m0.265s 00:09:20.443 user 0m0.101s 00:09:20.443 sys 0m0.063s 00:09:20.443 11:52:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.443 ************************************ 00:09:20.443 END TEST nvme_simple_copy 00:09:20.443 ************************************ 00:09:20.443 11:52:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:20.443 00:09:20.443 real 0m7.691s 00:09:20.443 user 0m1.032s 00:09:20.443 sys 0m1.368s 00:09:20.443 ************************************ 00:09:20.443 11:52:18 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.443 11:52:18 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:20.443 END TEST nvme_scc 00:09:20.443 ************************************ 00:09:20.443 11:52:18 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:20.443 11:52:18 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:20.443 11:52:18 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:20.443 11:52:18 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:20.443 11:52:18 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:20.443 11:52:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:20.443 11:52:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.443 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:09:20.443 ************************************ 00:09:20.443 START TEST nvme_fdp 00:09:20.443 ************************************ 00:09:20.443 11:52:18 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:09:20.705 * Looking for test storage... 
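
What the simple_copy binary verified above (write LBAs 0-63, Copy them to LBA 256, read back and compare) can be approximated from the kernel side with nvme-cli and dd. This is a hypothetical sketch, not the SPDK test's code path: the device node is assumed (the namespaces were just unbound from the kernel driver by setup.sh), and the copy flags should be checked against the local `nvme copy --help`:

    dev=/dev/nvme1n1                                    # assumed namespace node
    nvme copy "$dev" --slbs=0 --blocks=63 --sdlba=256   # copy LBAs 0..63 to LBA 256
    cmp <(dd if="$dev" bs=4096 count=64 status=none) \
        <(dd if="$dev" bs=4096 skip=256 count=64 status=none) \
        && echo "LBAs matching Written Data: 64"
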
00:09:20.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.705 --rc genhtml_branch_coverage=1 00:09:20.705 --rc genhtml_function_coverage=1 00:09:20.705 --rc genhtml_legend=1 00:09:20.705 --rc geninfo_all_blocks=1 00:09:20.705 --rc geninfo_unexecuted_blocks=1 00:09:20.705 00:09:20.705 ' 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.705 --rc genhtml_branch_coverage=1 00:09:20.705 --rc genhtml_function_coverage=1 00:09:20.705 --rc genhtml_legend=1 00:09:20.705 --rc geninfo_all_blocks=1 00:09:20.705 --rc geninfo_unexecuted_blocks=1 00:09:20.705 00:09:20.705 ' 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.705 --rc genhtml_branch_coverage=1 00:09:20.705 --rc genhtml_function_coverage=1 00:09:20.705 --rc genhtml_legend=1 00:09:20.705 --rc geninfo_all_blocks=1 00:09:20.705 --rc geninfo_unexecuted_blocks=1 00:09:20.705 00:09:20.705 ' 00:09:20.705 11:52:18 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:20.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.705 --rc genhtml_branch_coverage=1 00:09:20.705 --rc genhtml_function_coverage=1 00:09:20.705 --rc genhtml_legend=1 00:09:20.705 --rc geninfo_all_blocks=1 00:09:20.705 --rc geninfo_unexecuted_blocks=1 00:09:20.705 00:09:20.705 ' 00:09:20.705 11:52:18 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.705 11:52:18 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.705 11:52:18 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.705 11:52:18 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.705 11:52:18 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.705 11:52:18 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:20.705 11:52:18 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
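The lt/cmp_versions trace above gates the lcov coverage flags on `lt 1.15 2`: both version strings are split on ".", "-" and ":" and compared field by field until one side wins. A condensed sketch of that comparison, simplified from scripts/common.sh (the real cmp_versions also validates each field through decimal()):

# Returns success (0) when $1 is a strictly lower version than $2.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2, enabling branch/function coverage flags"
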
00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:20.705 11:52:18 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:20.705 11:52:18 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.705 11:52:18 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.228 Waiting for block devices as requested 00:09:21.228 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.228 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.228 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.488 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:26.841 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:26.841 11:52:24 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:26.841 11:52:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.841 11:52:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:26.841 11:52:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.841 11:52:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
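scan_nvme_ctrls, starting above, walks /sys/class/nvme/nvme*, and for each usable controller nvme_get parses `nvme id-ctrl` line by line into a global associative array (nvme0[vid], nvme0[oncs], ...), which is what the long run of eval traces below is doing. A reduced sketch of that parsing loop, assuming the "field : value" output format visible in the trace; it skips the shift/IFS bookkeeping of the real helper:

# Reduced nvme_get: store every "reg : val" pair from id-ctrl into a
# global associative array named after the controller.
nvme_get() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"
    local -n _arr=$ref
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # strip padding around the key
        val=${val# }                 # drop the single leading space
        [[ -n $reg && -n $val ]] && _arr[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

nvme_get nvme0 /dev/nvme0
echo "vid=${nvme0[vid]} oncs=${nvme0[oncs]}"
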
00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:26.841 11:52:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.841 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:26.842 11:52:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
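Among the fields captured above, mdts=7 bounds the largest single transfer this controller accepts: per the NVMe base spec, MDTS is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN). A quick worked example, assuming the common 4 KiB minimum page (CAP.MPSMIN of 0):

mdts=7
min_page=4096   # CAP.MPSMIN == 0 -> 4 KiB pages
echo "max transfer: $(( (1 << mdts) * min_page / 1024 )) KiB"   # -> 512 KiB
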
00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:26.842 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.842 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.842 11:52:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:26.843 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:26.843 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.843 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.844 
11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:26.844 11:52:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:26.844 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:26.845 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:26.845 11:52:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:26.845 11:52:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.845 11:52:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:26.846 11:52:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.846 11:52:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # 
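
The trace above finishes the id-ns scan of nvme0n1 and files controller nvme0 into the ctrls/nvmes/bdfs maps (BDF 0000:00:11.0) before moving on to /sys/class/nvme/nvme1 at 0000:00:10.0. The empty-looking [[ =~ 0000:00:10.0 ]] test is pci_can_use matching the BDF against PCI_ALLOWED, which is unset in this run, so the gate falls through to return 0 and the device gets scanned. What follows is nvme_get populating the nvme1 array; below is a minimal re-creation of the loop the xtrace is exercising, assuming nvme-cli's plain-text "field : value" output (demo_nvme_get and the exact whitespace trimming are illustrative guesses, not the suite's code):

    # Parse "reg : val" lines from nvme-cli into a global associative array,
    # mirroring the local -gA / IFS=: / read / eval pattern visible in the xtrace.
    demo_nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # same trick as: local -gA 'nvme1=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # keys arrive padded by nvme-cli
            [[ -n $reg && -n $val ]] || continue # the [[ -n ... ]] guard in the trace
            eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme1[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # demo_nvme_get nvme1 id-ctrl /dev/nvme1 && echo "${nvme1[sn]}"
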
IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 
11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.846 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- 
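
Among the id-ctrl fields just captured, oacs=0x12a is a bitmask (Optional Admin Command Support). Read against the NVMe base-spec bit assignments it suggests this QEMU controller advertises Format NVM, Namespace Management, Directives and Doorbell Buffer Config; Directives support is relevant to this nvme_fdp job, since Flexible Data Placement steers writes through directive fields. An illustrative decode (bit numbers per the base spec as I read it, worth double-checking against the revision in use):

    oacs=0x12a   # from the trace above
    # bit 1 Format NVM, bit 3 Namespace Mgmt, bit 5 Directives, bit 8 Doorbell Buffer Config
    for bit in 1 3 5 8; do
        (( oacs & (1 << bit) )) && echo "OACS bit $bit set"
    done
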
# IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 
11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.847 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:26.848 11:52:24 nvme_fdp -- 
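
Two of the values above are packed rather than scalar: sqes=0x66 and cqes=0x44 carry the required queue-entry size in the low nibble and the maximum in the high nibble, both as log2 of bytes, so this controller only does 64-byte SQEs and 16-byte CQEs. oncs=0x15d is likewise a bitmask; under the base-spec numbering its set bits (0, 2, 3, 4, 6, 8) would map to Compare, Dataset Management, Write Zeroes, save/select in Set/Get Features, Timestamp and Copy. A quick illustrative decode, not output from the run:

    sqes=0x66 cqes=0x44   # from the trace above
    echo "SQE $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"   # 64..64
    echo "CQE $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"   # 16..16
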
nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.848 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
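
The odd-looking nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' entry above is a parser artifact, not device data: the power-state lines in id-ctrl output contain colons themselves, and IFS=: read -r reg val only splits at the first colon, so the remainder of the line lands in val wholesale. Easy to reproduce in isolation:

    IFS=: read -r reg val <<< 'rwt:0 rwl:0 idle_power:- active_power:-'
    echo "reg=$reg"   # rwt
    echo "val=$val"   # 0 rwl:0 idle_power:- active_power:-

The fields the FDP test actually consumes are scalar ones, so these mangled power-state entries appear to be harmless noise.
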
0x17a17a ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:26.849 11:52:24 nvme_fdp -- 
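
The namespace geometry already falls out of the id-ns fields just captured: flbas=0x7 selects LBA format 7 via its low nibble (lbaf7 is indeed reported "(in use)" with lbads:12, i.e. 4096-byte blocks, a little further down), and nsze/ncap/nuse=0x17a17a are counted in logical blocks. A back-of-the-envelope check, purely illustrative:

    nsze=0x17a17a lbads=12 flbas=0x7           # from the trace
    echo "lbaf in use: $(( flbas & 0xf ))"     # 7
    echo "bytes: $(( nsze * (1 << lbads) ))"   # 6343335936, about 5.9 GiB
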
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:26.849 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:26.850 11:52:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.850 11:52:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:26.850 11:52:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.850 11:52:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:26.850 
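
With nvme1n1 parsed, the scan registers its second controller: _ctrl_ns maps namespace index 1 to nvme1n1, ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns (the name of the per-controller namespace array, not its contents; the scan itself dereferences it with local -n, see functions.sh@53 above), bdfs[nvme1]=0000:00:10.0, and ordered_ctrls indexes it by controller number; nvme2 at 0000:00:12.0 is picked up next. A sketch of how such name-valued maps can be walked afterwards, assuming bash 4.3+ namerefs (the consumer code is illustrative, not from the suite):

    # nvmes[$ctrl] holds a variable name; dereference it with a nameref.
    for ctrl in "${!ctrls[@]}"; do
        declare -n ns_map=${nvmes[$ctrl]}   # e.g. nvme1_ns
        echo "$ctrl @ ${bdfs[$ctrl]}: namespaces ${!ns_map[*]}"
        unset -n ns_map                     # drop the ref before the next pass
    done
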
11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:26.850 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:26.851 11:52:24 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:26.851 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.852 11:52:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:26.852 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:26.853 11:52:24 nvme_fdp -- 
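Once a controller array is populated (the nvme2 parse above ends at active_power_workload), its registers are directly scriptable. mdts, parsed earlier as 7 for this QEMU controller, is an exponent in units of 2^n times the minimum page size; a hedged sketch of decoding it, assuming the common 4 KiB CAP.MPSMIN (the real value would come from the controller's CAP register):

# Sketch: derive the max transfer size from the parsed mdts exponent.
mdts=${nvme2[mdts]:-0}                  # 7 in the trace above
page=4096                               # assumed CAP.MPSMIN page size
echo "max transfer: $(( (1 << mdts) * page )) bytes"   # 7 -> 524288 (512 KiB)
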
nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.853 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:26.854 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- 
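The surrounding loop comes from functions.sh@54: it globs the controller's namespace nodes out of sysfs, runs id-ns on each, and keys the nvme2_ns array on the trailing namespace number (nvme2n1 -> 1, via ${ns##*n}). A self-contained sketch of that discovery loop, with the id-ns output discarded here rather than fed through nvme_get as the real helper does:

# Sketch: enumerate a controller's namespaces via sysfs, as the trace does.
ctrl=/sys/class/nvme/nvme2
declare -A nvme2_ns
for ns in "$ctrl/${ctrl##*/}n"*; do      # matches nvme2n1, nvme2n2, ...
  [[ -e $ns ]] || continue               # glob may match nothing
  ns_dev=${ns##*/}
  nvme id-ns "/dev/$ns_dev" >/dev/null   # functions.sh parses this into an array
  nvme2_ns[${ns_dev##*n}]=$ns_dev        # e.g. nvme2_ns[2]=nvme2n2
done
echo "namespaces: ${!nvme2_ns[*]}"
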
nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 11:52:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.855 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:26.856 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.856 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
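A note for readers following the trace: the nvme/functions.sh@21-23 steps repeating above are the body of the nvme_get helper, which splits each "reg : val" line of nvme id-ns / id-ctrl output on the first colon and caches it in a global associative array named after the device (nvme2n3 here). A minimal sketch of that loop, assuming simplified whitespace trimming and an illustrative function name rather than the verbatim SPDK helper:

nvme_get_sketch() {
    local ref=$1 reg val                    # ref names the global array, e.g. nvme2n3
    shift
    local -gA "$ref=()"                     # same declaration the trace shows at @20
    while IFS=: read -r reg val; do         # split "nsze : 0x100000" on the first ':'
        reg=${reg//[[:space:]]/}            # nvme-cli pads register names (trimming details are an assumption)
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[$reg]=\"${val# }\""    # traced above as: eval 'nvme2n3[nsze]="0x100000"'
    done < <("$@")                          # e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
}
# usage sketch: nvme_get_sketch nvme2n3 nvme id-ns /dev/nvme2n3; echo "${nvme2n3[nsze]}"

Because read hands everything after the first delimiter to the last variable, composite values such as lbaf0's "ms:0 lbads:9 rp:0" keep their embedded colons intact.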
00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:26.857 
11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:26.857 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:26.858 11:52:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:26.858 11:52:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:26.858 11:52:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:26.858 11:52:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:26.858 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.858 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 
11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.859 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 
11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:26.860 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
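One value parsed above deserves a flag before this id-ctrl dump ends: nvme3's ctratt came back as 0x88010. Bit 19 of CTRATT (1 << 19 = 0x80000) is the flag this harness keys on for Flexible Data Placement support, as the (( ctratt & 1 << 19 )) checks traced further below show; 0x88010 has it set, while the 0x8000 reported by the other controllers does not. A quick sketch of that bit test (the helper name is illustrative, not from the SPDK source):

has_fdp_bit() { (( $1 & 1 << 19 )); }       # exit status 0 iff the CTRATT FDP bit is set
has_fdp_bit "$((0x88010))" && echo fdp      # nvme3 in this run: bit set
has_fdp_bit "$((0x8000))" || echo no-fdp    # the other controllers here: bit clear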
00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:26.861 11:52:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:26.861 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
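The ctrl_has_fdp calls traced here reduce to one test: bit 19 of the controller's CTRATT field. nvme1 above reports ctratt=0x8000, so the bit is clear and the controller is skipped; nvme3 below reports 0x88010, where bit 19 (0x80000) is set, so it is selected. A minimal standalone sketch of the same probe, assuming nvme-cli is installed and the controller is still bound to the kernel nvme driver (the device path is illustrative):

    # Read CTRATT from Identify Controller and test bit 19 (FDP support).
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk '$1 == "ctratt" {print $3}')
    if (( ctratt & (1 << 19) )); then
        echo "controller supports Flexible Data Placement"
    fi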
00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:26.862 11:52:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:26.862 11:52:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:26.862 11:52:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.708 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.708 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.708 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.708 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.708 11:52:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:27.708 11:52:25 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:27.708 11:52:25 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.708 11:52:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:27.708 ************************************ 00:09:27.708 START TEST nvme_flexible_data_placement 00:09:27.708 ************************************ 00:09:27.708 11:52:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:27.969 Initializing NVMe Controllers 00:09:27.970 Attaching to 0000:00:13.0 00:09:27.970 Controller supports FDP Attached to 0000:00:13.0 00:09:27.970 Namespace ID: 1 Endurance Group ID: 1 00:09:27.970 Initialization complete. 00:09:27.970 00:09:27.970 ================================== 00:09:27.970 == FDP tests for Namespace: #01 == 00:09:27.970 ================================== 00:09:27.970 00:09:27.970 Get Feature: FDP: 00:09:27.970 ================= 00:09:27.970 Enabled: Yes 00:09:27.970 FDP configuration Index: 0 00:09:27.970 00:09:27.970 FDP configurations log page 00:09:27.970 =========================== 00:09:27.970 Number of FDP configurations: 1 00:09:27.970 Version: 0 00:09:27.970 Size: 112 00:09:27.970 FDP Configuration Descriptor: 0 00:09:27.970 Descriptor Size: 96 00:09:27.970 Reclaim Group Identifier format: 2 00:09:27.970 FDP Volatile Write Cache: Not Present 00:09:27.970 FDP Configuration: Valid 00:09:27.970 Vendor Specific Size: 0 00:09:27.970 Number of Reclaim Groups: 2 00:09:27.970 Number of Reclaim Unit Handles: 8 00:09:27.970 Max Placement Identifiers: 128 00:09:27.970 Number of Namespaces Supported: 256 00:09:27.970 Reclaim Unit Nominal Size: 6000000 bytes 00:09:27.970 Estimated Reclaim Unit Time Limit: Not Reported 00:09:27.970 RUH Desc #000: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #001: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #002: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #003: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #004: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #005: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #006: RUH Type: Initially Isolated 00:09:27.970 RUH Desc #007: RUH Type: Initially Isolated 00:09:27.970 00:09:27.970 FDP reclaim unit handle usage log page 00:09:27.970 ====================================== 00:09:27.970 Number of Reclaim Unit Handles: 8 00:09:27.970 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:27.970 RUH Usage Desc #001: RUH Attributes: Unused 00:09:27.970 RUH Usage Desc #002: RUH Attributes: Unused 00:09:27.970 RUH Usage Desc #003: RUH Attributes: Unused 00:09:27.970 RUH Usage Desc #004: RUH Attributes: Unused 00:09:27.970 RUH Usage Desc #005: RUH Attributes: Unused 00:09:27.970 RUH Usage Desc #006: RUH Attributes: Unused 00:09:27.970 RUH Usage Desc #007: RUH Attributes: Unused 00:09:27.970 00:09:27.970 FDP statistics log page 00:09:27.970 ======================= 00:09:27.970 Host bytes with metadata written: 1079611392 00:09:27.970 Media bytes with metadata written: 1079881728 00:09:27.970 Media bytes erased: 0 00:09:27.970 00:09:27.970 FDP Reclaim unit handle status 00:09:27.970 ============================== 00:09:27.970 Number of RUHS descriptors: 2 00:09:27.970 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001a67 00:09:27.970 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:27.970 00:09:27.970 FDP write on placement id: 0 success 00:09:27.970 00:09:27.970 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:27.970 00:09:27.970 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:27.970 00:09:27.970 Get Feature: FDP Events for Placement handle: #0 00:09:27.970 ======================== 00:09:27.970 Number of FDP Events: 6 00:09:27.970 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:27.970 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:27.970 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:27.970 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:27.970 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:27.970 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:27.970 00:09:27.970 FDP events log page 00:09:27.970 =================== 00:09:27.970 Number of FDP events: 1 00:09:27.970 FDP Event #0: 00:09:27.970 Event Type: RU Not Written to Capacity 00:09:27.970 Placement Identifier: Valid 00:09:27.970 NSID: Valid 00:09:27.970 Location: Valid 00:09:27.970 Placement Identifier: 0 00:09:27.970 Event Timestamp: 6 00:09:27.970 Namespace Identifier: 1 00:09:27.970 Reclaim Group Identifier: 0 00:09:27.970 Reclaim Unit Handle Identifier: 0 00:09:27.970 00:09:27.970 FDP test passed 00:09:27.970 00:09:27.970 real 0m0.236s 00:09:27.970 user 0m0.063s 00:09:27.970 sys 0m0.071s 00:09:27.970 11:52:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.970 ************************************ 00:09:27.970 END TEST nvme_flexible_data_placement 00:09:27.970 ************************************ 00:09:27.970 11:52:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:28.231 00:09:28.231 real 0m7.586s 00:09:28.231 user 0m0.989s 00:09:28.231 sys 0m1.369s 00:09:28.231 11:52:25 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.231 ************************************ 00:09:28.231 END TEST nvme_fdp 00:09:28.231 ************************************ 00:09:28.231 11:52:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:28.231 11:52:25 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:28.231 11:52:25 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:28.231 11:52:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.231 11:52:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.231 11:52:25 -- common/autotest_common.sh@10 -- # set +x 00:09:28.231 ************************************ 00:09:28.231 START TEST nvme_rpc 00:09:28.231 ************************************ 00:09:28.231 11:52:25 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:28.231 * Looking for test storage... 
00:09:28.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.231 11:52:25 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.231 11:52:25 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.231 11:52:25 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.231 11:52:25 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.231 11:52:25 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.232 11:52:25 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.232 --rc genhtml_branch_coverage=1 00:09:28.232 --rc genhtml_function_coverage=1 00:09:28.232 --rc genhtml_legend=1 00:09:28.232 --rc geninfo_all_blocks=1 00:09:28.232 --rc geninfo_unexecuted_blocks=1 00:09:28.232 00:09:28.232 ' 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.232 --rc genhtml_branch_coverage=1 00:09:28.232 --rc genhtml_function_coverage=1 00:09:28.232 --rc genhtml_legend=1 00:09:28.232 --rc geninfo_all_blocks=1 00:09:28.232 --rc geninfo_unexecuted_blocks=1 00:09:28.232 00:09:28.232 ' 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.232 --rc genhtml_branch_coverage=1 00:09:28.232 --rc genhtml_function_coverage=1 00:09:28.232 --rc genhtml_legend=1 00:09:28.232 --rc geninfo_all_blocks=1 00:09:28.232 --rc geninfo_unexecuted_blocks=1 00:09:28.232 00:09:28.232 ' 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.232 --rc genhtml_branch_coverage=1 00:09:28.232 --rc genhtml_function_coverage=1 00:09:28.232 --rc genhtml_legend=1 00:09:28.232 --rc geninfo_all_blocks=1 00:09:28.232 --rc geninfo_unexecuted_blocks=1 00:09:28.232 00:09:28.232 ' 00:09:28.232 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.232 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.232 11:52:25 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:28.492 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:28.492 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65654 00:09:28.492 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:28.492 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:28.492 11:52:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65654 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65654 ']' 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.492 11:52:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.492 [2024-11-18 11:52:26.029801] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
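The get_first_nvme_bdf helper traced above works by asking gen_nvme.sh for a generated bdev config and extracting every controller's PCIe address with jq; the test takes the first result (0000:00:10.0 here) and attaches it as Nvme0 below. The same discovery as a standalone sketch, using the paths from this run:

    # Emit the generated NVMe bdev config and keep the first traddr.
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
        | jq -r '.config[].params.traddr' \
        | head -n 1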
00:09:28.492 [2024-11-18 11:52:26.029918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65654 ] 00:09:28.492 [2024-11-18 11:52:26.181664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.753 [2024-11-18 11:52:26.278396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.753 [2024-11-18 11:52:26.278467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.324 11:52:26 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.324 11:52:26 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:29.324 11:52:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:29.584 Nvme0n1 00:09:29.584 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:29.584 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:29.844 request: 00:09:29.844 { 00:09:29.844 "bdev_name": "Nvme0n1", 00:09:29.844 "filename": "non_existing_file", 00:09:29.844 "method": "bdev_nvme_apply_firmware", 00:09:29.844 "req_id": 1 00:09:29.844 } 00:09:29.844 Got JSON-RPC error response 00:09:29.844 response: 00:09:29.844 { 00:09:29.844 "code": -32603, 00:09:29.844 "message": "open file failed." 00:09:29.844 } 00:09:29.844 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:29.844 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:29.844 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:29.844 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:29.844 11:52:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65654 00:09:29.844 11:52:27 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65654 ']' 00:09:29.844 11:52:27 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65654 00:09:29.844 11:52:27 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:09:29.844 11:52:27 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:29.844 11:52:27 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65654 00:09:30.105 11:52:27 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:30.105 killing process with pid 65654 00:09:30.105 11:52:27 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:30.105 11:52:27 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65654' 00:09:30.105 11:52:27 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65654 00:09:30.105 11:52:27 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65654 00:09:31.490 00:09:31.490 real 0m3.173s 00:09:31.490 user 0m6.095s 00:09:31.490 sys 0m0.459s 00:09:31.490 ************************************ 00:09:31.490 END TEST nvme_rpc 00:09:31.490 ************************************ 00:09:31.490 11:52:28 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.490 11:52:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.490 11:52:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.490 11:52:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:09:31.490 11:52:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.490 11:52:28 -- common/autotest_common.sh@10 -- # set +x 00:09:31.490 ************************************ 00:09:31.490 START TEST nvme_rpc_timeouts 00:09:31.490 ************************************ 00:09:31.490 11:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:31.490 * Looking for test storage... 00:09:31.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.490 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.490 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.490 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.490 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:31.490 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.491 11:52:29 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.491 --rc genhtml_branch_coverage=1 00:09:31.491 --rc genhtml_function_coverage=1 00:09:31.491 --rc genhtml_legend=1 00:09:31.491 --rc geninfo_all_blocks=1 00:09:31.491 --rc geninfo_unexecuted_blocks=1 00:09:31.491 00:09:31.491 ' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.491 --rc genhtml_branch_coverage=1 00:09:31.491 --rc genhtml_function_coverage=1 00:09:31.491 --rc genhtml_legend=1 00:09:31.491 --rc geninfo_all_blocks=1 00:09:31.491 --rc geninfo_unexecuted_blocks=1 00:09:31.491 00:09:31.491 ' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.491 --rc genhtml_branch_coverage=1 00:09:31.491 --rc genhtml_function_coverage=1 00:09:31.491 --rc genhtml_legend=1 00:09:31.491 --rc geninfo_all_blocks=1 00:09:31.491 --rc geninfo_unexecuted_blocks=1 00:09:31.491 00:09:31.491 ' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.491 --rc genhtml_branch_coverage=1 00:09:31.491 --rc genhtml_function_coverage=1 00:09:31.491 --rc genhtml_legend=1 00:09:31.491 --rc geninfo_all_blocks=1 00:09:31.491 --rc geninfo_unexecuted_blocks=1 00:09:31.491 00:09:31.491 ' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65715 00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65715 00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65747 00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
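Before modifying anything, the test snapshots the target's default settings so it can diff them later; the save_config call traced below presumably feeds the tmpfile_default_settings file named above (the redirect itself is not visible in the xtrace). As a sketch:

    # Snapshot the running target's configuration for the later comparison.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config \
        > /tmp/settings_default_65715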
00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65747 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65747 ']' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.491 11:52:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:31.491 11:52:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:31.750 [2024-11-18 11:52:29.197390] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:09:31.750 [2024-11-18 11:52:29.197507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65747 ] 00:09:31.750 [2024-11-18 11:52:29.353562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.750 [2024-11-18 11:52:29.429859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.750 [2024-11-18 11:52:29.430003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.681 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.681 Checking default timeout settings: 00:09:32.681 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:09:32.681 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:32.681 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:32.681 Making settings changes with rpc: 00:09:32.681 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:32.681 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:32.939 Check default vs. modified settings: 00:09:32.939 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:32.939 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65715 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65715 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:33.195 Setting action_on_timeout is changed as expected. 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65715 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:33.195 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65715 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:33.453 Setting timeout_us is changed as expected. 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
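Each setting is verified the same way: grep the key out of the default and modified config dumps, take the second field with awk, and strip punctuation with sed before comparing. A compact equivalent of that pipeline (the helper name is hypothetical):

    # Pull one bdev_nvme option out of a saved JSON config dump.
    get_setting() {
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting timeout_us /tmp/settings_default_65715)
    after=$(get_setting timeout_us /tmp/settings_modified_65715)
    [ "$before" != "$after" ] && echo "timeout_us: $before -> $after"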
00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65715 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65715 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:33.453 Setting timeout_admin_us is changed as expected. 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65715 /tmp/settings_modified_65715 00:09:33.453 11:52:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65747 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65747 ']' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65747 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65747 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:33.453 killing process with pid 65747 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65747' 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65747 00:09:33.453 11:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65747 00:09:34.832 RPC TIMEOUT SETTING TEST PASSED. 00:09:34.832 11:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
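For reference, the modification the test just validated was applied with a single RPC, exactly as traced at nvme_rpc_timeouts.sh@34 above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort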
00:09:34.832 00:09:34.832 real 0m3.124s 00:09:34.832 user 0m6.125s 00:09:34.832 sys 0m0.463s 00:09:34.832 11:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.832 11:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:34.832 ************************************ 00:09:34.832 END TEST nvme_rpc_timeouts 00:09:34.832 ************************************ 00:09:34.832 11:52:32 -- spdk/autotest.sh@239 -- # uname -s 00:09:34.832 11:52:32 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:34.832 11:52:32 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.832 11:52:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:34.832 11:52:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.832 11:52:32 -- common/autotest_common.sh@10 -- # set +x 00:09:34.832 ************************************ 00:09:34.832 START TEST sw_hotplug 00:09:34.832 ************************************ 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.832 * Looking for test storage... 00:09:34.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.832 11:52:32 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.832 --rc genhtml_branch_coverage=1 00:09:34.832 --rc genhtml_function_coverage=1 00:09:34.832 --rc genhtml_legend=1 00:09:34.832 --rc geninfo_all_blocks=1 00:09:34.832 --rc geninfo_unexecuted_blocks=1 00:09:34.832 00:09:34.832 ' 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.832 --rc genhtml_branch_coverage=1 00:09:34.832 --rc genhtml_function_coverage=1 00:09:34.832 --rc genhtml_legend=1 00:09:34.832 --rc geninfo_all_blocks=1 00:09:34.832 --rc geninfo_unexecuted_blocks=1 00:09:34.832 00:09:34.832 ' 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.832 --rc genhtml_branch_coverage=1 00:09:34.832 --rc genhtml_function_coverage=1 00:09:34.832 --rc genhtml_legend=1 00:09:34.832 --rc geninfo_all_blocks=1 00:09:34.832 --rc geninfo_unexecuted_blocks=1 00:09:34.832 00:09:34.832 ' 00:09:34.832 11:52:32 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.832 --rc genhtml_branch_coverage=1 00:09:34.832 --rc genhtml_function_coverage=1 00:09:34.832 --rc genhtml_legend=1 00:09:34.832 --rc geninfo_all_blocks=1 00:09:34.832 --rc geninfo_unexecuted_blocks=1 00:09:34.832 00:09:34.832 ' 00:09:34.832 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.093 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.093 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.093 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.093 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:35.093 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:35.093 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:35.093 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
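The nvme_in_userspace trace that follows enumerates controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), i.e. class string 0108 plus -p02 in lspci's machine-readable output. A condensed sketch of the same enumeration, assuming pciutils' lspci:

    # Print the PCI address of every NVMe controller (class 0108, prog-if 02).
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' '$2 == cc {print $1}'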
00:09:35.093 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.093 11:52:32 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.093 11:52:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:35.094 11:52:32 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.094 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:35.094 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:35.094 11:52:32 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.666 Waiting for block devices as requested 00:09:35.666 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.666 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.926 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.926 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.216 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:41.216 11:52:38 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:41.216 11:52:38 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.504 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:41.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.504 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:41.768 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:42.029 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.029 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:42.029 11:52:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66602 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:42.029 11:52:39 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:42.029 11:52:39 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:42.029 11:52:39 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:42.029 11:52:39 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:42.029 11:52:39 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:42.029 11:52:39 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:42.289 Initializing NVMe Controllers 00:09:42.289 Attaching to 0000:00:10.0 00:09:42.289 Attaching to 0000:00:11.0 00:09:42.289 Attached to 0000:00:10.0 00:09:42.289 Attached to 0000:00:11.0 00:09:42.289 Initialization complete. Starting I/O... 
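While the hotplug example app polls its I/O queues, the remove_attach_helper traced below yanks each allowed controller out from under it (the echo 1 at sw_hotplug.sh@40) and later rebinds it (the uio_pci_generic and BDF echoes at @59-@61). The script's literal writes are not shown in the xtrace, but on Linux this corresponds to the standard sysfs sequence, sketched here:

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove             # surprise-remove
    echo 1 > /sys/bus/pci/rescan                          # rediscover the function
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo $bdf > /sys/bus/pci/drivers_probe                # rebind to userspace driver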
00:09:42.289 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:42.289 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:42.289 00:09:43.230 QEMU NVMe Ctrl (12340 ): 2768 I/Os completed (+2768) 00:09:43.230 QEMU NVMe Ctrl (12341 ): 2768 I/Os completed (+2768) 00:09:43.230 00:09:44.605 QEMU NVMe Ctrl (12340 ): 6523 I/Os completed (+3755) 00:09:44.605 QEMU NVMe Ctrl (12341 ): 6503 I/Os completed (+3735) 00:09:44.605 00:09:45.549 QEMU NVMe Ctrl (12340 ): 9746 I/Os completed (+3223) 00:09:45.549 QEMU NVMe Ctrl (12341 ): 9736 I/Os completed (+3233) 00:09:45.549 00:09:46.493 QEMU NVMe Ctrl (12340 ): 13034 I/Os completed (+3288) 00:09:46.493 QEMU NVMe Ctrl (12341 ): 13024 I/Os completed (+3288) 00:09:46.493 00:09:47.427 QEMU NVMe Ctrl (12340 ): 16554 I/Os completed (+3520) 00:09:47.427 QEMU NVMe Ctrl (12341 ): 16563 I/Os completed (+3539) 00:09:47.427 00:09:47.994 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:47.994 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.994 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.994 [2024-11-18 11:52:45.687463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:47.994 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:47.994 [2024-11-18 11:52:45.688413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 [2024-11-18 11:52:45.688455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 [2024-11-18 11:52:45.688469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 [2024-11-18 11:52:45.688485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:47.994 [2024-11-18 11:52:45.690058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 [2024-11-18 11:52:45.690099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 [2024-11-18 11:52:45.690112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.994 [2024-11-18 11:52:45.690124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.253 [2024-11-18 11:52:45.705729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:48.253 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:48.253 [2024-11-18 11:52:45.706565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 [2024-11-18 11:52:45.706609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 [2024-11-18 11:52:45.706627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 [2024-11-18 11:52:45.706641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:48.253 [2024-11-18 11:52:45.707977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 [2024-11-18 11:52:45.708009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 [2024-11-18 11:52:45.708021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 [2024-11-18 11:52:45.708034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.253 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:48.253 EAL: Scan for (pci) bus failed. 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:48.253 Attaching to 0000:00:10.0 00:09:48.253 Attached to 0000:00:10.0 00:09:48.253 QEMU NVMe Ctrl (12340 ): 140 I/Os completed (+140) 00:09:48.253 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.253 11:52:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:48.253 Attaching to 0000:00:11.0 00:09:48.253 Attached to 0000:00:11.0 00:09:49.195 QEMU NVMe Ctrl (12340 ): 3515 I/Os completed (+3375) 00:09:49.195 QEMU NVMe Ctrl (12341 ): 3258 I/Os completed (+3258) 00:09:49.195 00:09:50.577 QEMU NVMe Ctrl (12340 ): 6787 I/Os completed (+3272) 00:09:50.577 QEMU NVMe Ctrl (12341 ): 6530 I/Os completed (+3272) 00:09:50.577 00:09:51.511 QEMU NVMe Ctrl (12340 ): 10410 I/Os completed (+3623) 00:09:51.511 QEMU NVMe Ctrl (12341 ): 10147 I/Os completed (+3617) 00:09:51.511 00:09:52.445 QEMU NVMe Ctrl (12340 ): 14088 I/Os completed (+3678) 00:09:52.445 QEMU NVMe Ctrl (12341 ): 13833 I/Os completed (+3686) 00:09:52.445 00:09:53.380 QEMU NVMe Ctrl (12340 ): 17771 I/Os completed (+3683) 00:09:53.380 QEMU NVMe Ctrl (12341 ): 17503 I/Os completed (+3670) 00:09:53.380 00:09:54.315 QEMU NVMe Ctrl (12340 ): 21461 I/Os completed (+3690) 00:09:54.315 QEMU NVMe Ctrl (12341 ): 21192 I/Os completed (+3689) 00:09:54.315 00:09:55.337 QEMU NVMe Ctrl (12340 ): 25150 I/Os completed (+3689) 
00:09:55.337 QEMU NVMe Ctrl (12341 ): 24872 I/Os completed (+3680)
00:09:55.337
00:09:56.272 QEMU NVMe Ctrl (12340 ): 28838 I/Os completed (+3688)
00:09:56.272 QEMU NVMe Ctrl (12341 ): 28557 I/Os completed (+3685)
00:09:56.272
00:09:57.207 QEMU NVMe Ctrl (12340 ): 32485 I/Os completed (+3647)
00:09:57.207 QEMU NVMe Ctrl (12341 ): 32218 I/Os completed (+3661)
00:09:57.207
00:09:58.592 QEMU NVMe Ctrl (12340 ): 35840 I/Os completed (+3355)
00:09:58.592 QEMU NVMe Ctrl (12341 ): 35586 I/Os completed (+3368)
00:09:58.592
00:09:59.531 QEMU NVMe Ctrl (12340 ): 39161 I/Os completed (+3321)
00:09:59.532 QEMU NVMe Ctrl (12341 ): 38999 I/Os completed (+3413)
00:09:59.532
00:10:00.465 QEMU NVMe Ctrl (12340 ): 42862 I/Os completed (+3701)
00:10:00.465 QEMU NVMe Ctrl (12341 ): 42697 I/Os completed (+3698)
00:10:00.465
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:00.465 [2024-11-18 11:52:57.924004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:00.465 Controller removed: QEMU NVMe Ctrl (12340 )
00:10:00.465 [2024-11-18 11:52:57.924940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.924988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.925004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.925018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:00.465 [2024-11-18 11:52:57.926629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.926671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.926683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.926694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:00.465 [2024-11-18 11:52:57.945800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:00.465 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:00.465 [2024-11-18 11:52:57.946644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.946678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.946696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.946709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:00.465 [2024-11-18 11:52:57.948047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.948080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.948092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 [2024-11-18 11:52:57.948105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:00.465 11:52:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:00.465 Attaching to 0000:00:10.0
00:10:00.465 Attached to 0000:00:10.0
00:10:00.465 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:00.723 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:00.723 11:52:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:00.723 Attaching to 0000:00:11.0
00:10:00.723 Attached to 0000:00:11.0
00:10:01.289 QEMU NVMe Ctrl (12340 ): 2914 I/Os completed (+2914)
00:10:01.289 QEMU NVMe Ctrl (12341 ): 2607 I/Os completed (+2607)
00:10:01.289
00:10:02.223 QEMU NVMe Ctrl (12340 ): 6599 I/Os completed (+3685)
00:10:02.223 QEMU NVMe Ctrl (12341 ): 6288 I/Os completed (+3681)
00:10:02.223
00:10:03.265 QEMU NVMe Ctrl (12340 ): 10222 I/Os completed (+3623)
00:10:03.265 QEMU NVMe Ctrl (12341 ): 9862 I/Os completed (+3574)
00:10:03.265
00:10:04.206 QEMU NVMe Ctrl (12340 ): 13450 I/Os completed (+3228)
00:10:04.206 QEMU NVMe Ctrl (12341 ): 13090 I/Os completed (+3228)
00:10:04.206
00:10:05.580 QEMU NVMe Ctrl (12340 ): 17108 I/Os completed (+3658)
00:10:05.580 QEMU NVMe Ctrl (12341 ): 16750 I/Os completed (+3660)
00:10:05.580
00:10:06.515 QEMU NVMe Ctrl (12340 ): 20760 I/Os completed (+3652)
00:10:06.515 QEMU NVMe Ctrl (12341 ): 20397 I/Os completed (+3647)
00:10:06.515
00:10:07.447 QEMU NVMe Ctrl (12340 ): 24410 I/Os completed (+3650)
00:10:07.447 QEMU NVMe Ctrl (12341 ): 24047 I/Os completed (+3650)
00:10:07.447
00:10:08.378 QEMU NVMe Ctrl (12340 ): 28058 I/Os completed (+3648)
00:10:08.378 QEMU NVMe Ctrl (12341 ): 27710 I/Os completed (+3663)
00:10:08.378
00:10:09.311 QEMU NVMe Ctrl (12340 ): 31715 I/Os completed (+3657)
00:10:09.311 QEMU NVMe Ctrl (12341 ): 31387 I/Os completed (+3677)
00:10:09.311
00:10:10.250 QEMU NVMe Ctrl (12340 ): 35086 I/Os completed (+3371)
00:10:10.250 QEMU NVMe Ctrl (12341 ): 34851 I/Os completed (+3464)
00:10:10.250
00:10:11.626 QEMU NVMe Ctrl (12340 ): 38507 I/Os completed (+3421)
00:10:11.626 QEMU NVMe Ctrl (12341 ): 38269 I/Os completed (+3418)
00:10:11.626
00:10:12.191 QEMU NVMe Ctrl (12340 ): 42138 I/Os completed (+3631)
00:10:12.191 QEMU NVMe Ctrl (12341 ): 41919 I/Os completed (+3650)
00:10:12.191
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:12.759 [2024-11-18 11:53:10.179647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:12.759 Controller removed: QEMU NVMe Ctrl (12340 )
00:10:12.759 [2024-11-18 11:53:10.180612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.180652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.180666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.180680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:12.759 [2024-11-18 11:53:10.182269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.182311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.182323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.182336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:12.759 [2024-11-18 11:53:10.200216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:12.759 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:12.759 [2024-11-18 11:53:10.201077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.201111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.201127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.201139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:12.759 [2024-11-18 11:53:10.202468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.202503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.202517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 [2024-11-18 11:53:10.202527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:12.759 Attaching to 0000:00:10.0
00:10:12.759 Attached to 0000:00:10.0
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:12.759 11:53:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:12.759 Attaching to 0000:00:11.0
00:10:12.759 Attached to 0000:00:11.0
00:10:12.759 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:12.759 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:12.759 [2024-11-18 11:53:10.420918] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:10:24.987 11:53:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:24.987 11:53:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:24.987 11:53:22 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.73
00:10:24.987 11:53:22 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.73
00:10:24.987 11:53:22 sw_hotplug -- common/autotest_common.sh@720 -- # return 0
00:10:24.987 11:53:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.73
00:10:24.987 11:53:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.73 2
00:10:24.987 remove_attach_helper took 42.73s to complete (handling 2 nvme drive(s)) 11:53:22 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:10:31.573 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66602
00:10:31.574 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66602) - No such process
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66602
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67155
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67155
00:10:31.574 11:53:28 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67155 ']'
00:10:31.574 11:53:28 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:31.574 11:53:28 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:10:31.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:31.574 11:53:28 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:31.574 11:53:28 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:31.574 11:53:28 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:31.574 11:53:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:31.574 [2024-11-18 11:53:28.513603] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:10:31.574 [2024-11-18 11:53:28.513757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67155 ]
00:10:31.574 [2024-11-18 11:53:28.675727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:31.574 [2024-11-18 11:53:28.802701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@866 -- # return 0
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]]
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@709 -- # exec
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R
00:10:31.835 11:53:29 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:10:31.835 11:53:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:38.402 11:53:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.402 11:53:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:38.402 11:53:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:38.402 11:53:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:38.402 [2024-11-18 11:53:35.588471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:38.402 [2024-11-18 11:53:35.589691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.589726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.589737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.589755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.589762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.589770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.589777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.589785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.589791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.589802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.589809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.589817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.988465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:38.402 [2024-11-18 11:53:35.989671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.989701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.989712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.989726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.989735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.989742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.989750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.402 [2024-11-18 11:53:35.989757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.402 [2024-11-18 11:53:35.989764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.402 [2024-11-18 11:53:35.989771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:38.403 [2024-11-18 11:53:35.989779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:38.403 [2024-11-18 11:53:35.989785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:38.403 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:38.403 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:38.403 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:38.403 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:38.403 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:38.403 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:38.403 11:53:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.403 11:53:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:38.403 11:53:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
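By this point the log has switched modes: the kernel-driver phase is finished (the 42.73 s helper run above), the old helper process 66602 is confirmed dead, and tgt_run_hotplug starts an SPDK target so the same plug/unplug cycle can be observed through bdevs. A condensed sketch of the traced setup at nvme/sw_hotplug.sh lines 107-122; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, and the backgrounding and path variable here are assumptions beyond what the trace shows:

    "$rootdir"/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT

    waitforlisten "$spdk_tgt_pid"        # polls the RPC socket; max_retries=100 per the trace
    rpc_cmd bdev_nvme_set_hotplug -e     # enable the target's NVMe hotplug monitor
    debug_remove_attach_helper 3 6 true  # 3 hotplug events, 6 s settle, use_bdev=true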
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:38.661 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:38.919 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:38.920 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:38.920 11:53:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:51.120 11:53:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.120 11:53:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:51.120 11:53:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:51.120 11:53:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.120 11:53:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:51.120 11:53:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:51.120 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:51.120 [2024-11-18 11:53:48.488656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
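With use_bdev=true the helper decides whether a controller is really gone by asking the target rather than the kernel. The @12/@13 trace above pins the helper down almost completely; only the function framing and the process substitution (the /dev/fd/63 that jq reads from) are assumptions:

    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The traced "(( 2 > 0 ))" is the follow-on check: both BDFs are still reported immediately after the remove, so the helper sleeps 0.5 s and polls again.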
00:10:51.120 [2024-11-18 11:53:48.489821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.120 [2024-11-18 11:53:48.489856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.120 [2024-11-18 11:53:48.489866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.120 [2024-11-18 11:53:48.489883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.120 [2024-11-18 11:53:48.489891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.120 [2024-11-18 11:53:48.489899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.120 [2024-11-18 11:53:48.489906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.120 [2024-11-18 11:53:48.489914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.120 [2024-11-18 11:53:48.489920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.121 [2024-11-18 11:53:48.489928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.121 [2024-11-18 11:53:48.489935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.121 [2024-11-18 11:53:48.489942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.379 [2024-11-18 11:53:48.888655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:51.379 [2024-11-18 11:53:48.889821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.379 [2024-11-18 11:53:48.889850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.379 [2024-11-18 11:53:48.889862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.379 [2024-11-18 11:53:48.889874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.379 [2024-11-18 11:53:48.889882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.379 [2024-11-18 11:53:48.889889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.379 [2024-11-18 11:53:48.889897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.379 [2024-11-18 11:53:48.889903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.379 [2024-11-18 11:53:48.889911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.379 [2024-11-18 11:53:48.889918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:51.379 [2024-11-18 11:53:48.889925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:51.379 [2024-11-18 11:53:48.889931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:51.379 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:51.379 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:51.379 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:51.379 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:51.379 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:51.379 11:53:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:51.379 11:53:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:51.379 11:53:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:51.379 11:53:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:51.379 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:51.379 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:51.379 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:51.379 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:51.379 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:51.637 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:51.637 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:51.637 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:51.637 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:51.637 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:51.637 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:51.638 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:51.638 11:53:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:03.847 11:54:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.847 11:54:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:03.847 11:54:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:03.847 11:54:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.847 11:54:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:03.847 11:54:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:03.847 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:04.105 [2024-11-18 11:54:01.688844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:04.105 [2024-11-18 11:54:01.689985] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:04.105 [2024-11-18 11:54:01.690015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:04.105 [2024-11-18 11:54:01.690026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.105 [2024-11-18 11:54:01.690040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:04.105 [2024-11-18 11:54:01.690049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:04.105 [2024-11-18 11:54:01.690056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.105 [2024-11-18 11:54:01.690068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:04.105 [2024-11-18 11:54:01.690074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:04.105 [2024-11-18 11:54:01.690084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.105 [2024-11-18 11:54:01.690090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:04.105 [2024-11-18 11:54:01.690098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:04.105 [2024-11-18 11:54:01.690105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:04.363 11:54:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:04.363 11:54:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:04.363 11:54:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:04.363 11:54:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:04.363 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:04.363 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:04.363 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:04.363 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:04.363 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:04.621 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:04.621 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:04.621 11:54:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.65
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.65
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@720 -- # return 0
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2
00:11:16.834 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]]
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@709 -- # exec
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R
00:11:16.834 11:54:14 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:11:16.834 11:54:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:23.390 11:54:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.390 11:54:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:23.390 11:54:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:23.390 [2024-11-18 11:54:20.266212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
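The 44.65 s figure above comes from the traced timing_cmd wrapper (common/autotest_common.sh lines 707-720): it runs the helper under bash's time builtin with TIMEFORMAT=%2R, so only the elapsed wall-clock seconds are emitted and land in helper_time. A simplified stand-in consistent with the trace; the real wrapper also preserves the command's own streams and exit status, which this sketch glosses over:

    timing_cmd() {
        local TIMEFORMAT=%2R time
        # time's report goes to the group's stderr; redirecting it to stdout
        # lets the command substitution capture just the seconds.
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
        echo "$time"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2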
00:11:23.390 [2024-11-18 11:54:20.267103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.267138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.267149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.267166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.267173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.267181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.267188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.267198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.267204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.267213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.267219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.267229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.666205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
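Each abort burst like the one above is followed in the trace by the @50/@51 poll: list the surviving BDFs, and while any remain, announce them and retry every half second. Reassembled from the traced lines; only the loop framing is an assumption:

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

The count visibly drains across the surrounding trace: "(( 2 > 0 ))" right after the removes, "(( 1 > 0 ))" once 0000:00:10.0 is gone, and "(( 0 > 0 ))" when 0000:00:11.0 follows and the loop exits.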
00:11:23.390 [2024-11-18 11:54:20.667068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.667096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.667107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.667118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.667127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.667133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.667142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.667149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.667157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 [2024-11-18 11:54:20.667164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:23.390 [2024-11-18 11:54:20.667172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:23.390 [2024-11-18 11:54:20.667178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:23.390 11:54:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:23.390 11:54:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:23.390 11:54:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:23.390 11:54:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:23.390 11:54:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:23.390 11:54:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:35.640 11:54:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.640 11:54:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:35.640 11:54:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:35.640 [2024-11-18 11:54:33.066419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:35.640 [2024-11-18 11:54:33.067384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:35.640 [2024-11-18 11:54:33.067416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:35.640 [2024-11-18 11:54:33.067427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:35.640 [2024-11-18 11:54:33.067444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:35.640 [2024-11-18 11:54:33.067451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:35.640 [2024-11-18 11:54:33.067459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:35.640 [2024-11-18 11:54:33.067466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:35.640 [2024-11-18 11:54:33.067474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:35.640 [2024-11-18 11:54:33.067481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:35.640 [2024-11-18 11:54:33.067489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:35.640 [2024-11-18 11:54:33.067495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:35.640 [2024-11-18 11:54:33.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:35.640 11:54:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:35.640 11:54:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:35.640 11:54:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:35.640 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:36.206 11:54:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.206 11:54:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:36.206 11:54:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:36.206 11:54:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:36.206 [2024-11-18 11:54:33.766424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:36.206 [2024-11-18 11:54:33.767285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:36.206 [2024-11-18 11:54:33.767328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:36.206 [2024-11-18 11:54:33.767339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:36.206 [2024-11-18 11:54:33.767352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:36.206 [2024-11-18 11:54:33.767363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:36.206 [2024-11-18 11:54:33.767370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:36.206 [2024-11-18 11:54:33.767379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:36.206 [2024-11-18 11:54:33.767386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:36.206 [2024-11-18 11:54:33.767394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:36.206 [2024-11-18 11:54:33.767402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:36.206 [2024-11-18 11:54:33.767409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:36.206 [2024-11-18 11:54:33.767416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:36.465 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:36.465 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:36.465 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:36.465 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:36.465 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:36.465 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:36.465 11:54:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:36.465 11:54:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:36.723 11:54:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
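Once the poll drains, the helper re-attaches the devices (the @56-@62 echoes above) and, after the 12 s settle that follows, verifies recovery at nvme/sw_hotplug.sh@71 with a literal comparison of the rediscovered BDF list against the expected pair; the heavily escaped right-hand side in the trace is just bash echoing the pattern match. Reassembled from the traced lines:

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # both controllers are back

A mismatch fails the iteration; on success hotplug_events is decremented and the next cycle begins, as the trace below shows.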
00:11:36.724 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.982 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.982 11:54:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.175 11:54:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.175 11:54:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.175 11:54:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.175 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.175 11:54:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.176 11:54:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.176 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.176 11:54:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.176 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.176 11:54:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.176 [2024-11-18 11:54:46.566640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
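At this point one full remove/re-attach cycle has completed: detach at sh@40, the poll above, re-attach at sh@56-62, a 12 s settle (sh@66), and the sh@71 comparison against the expected BDF list; the sh@38 countdown then starts the next cycle against both controllers. The sysfs writes appear only as bare echos in the trace, so the target paths in this sketch are assumptions about what a PCI hotplug test like this plausibly writes, not a verbatim reconstruction:

    # Detach (sh@40): assumption - "echo 1" lands in the PCI remove node.
    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    done
    # ... poll with bdev_bdfs until the bdevs are gone ...
    # Re-attach: sh@56's "echo 1" is plausibly a bus rescan; sh@59-62 then steer
    # each function back to uio_pci_generic (the nodes behind the two BDF
    # writes at sh@60/61 are not visible in the trace).
    echo 1 > /sys/bus/pci/rescan
    for bdf in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done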
00:11:49.176 [2024-11-18 11:54:46.567513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.176 [2024-11-18 11:54:46.567546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.176 [2024-11-18 11:54:46.567557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.176 [2024-11-18 11:54:46.567573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.176 [2024-11-18 11:54:46.567590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.176 [2024-11-18 11:54:46.567599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.176 [2024-11-18 11:54:46.567607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.176 [2024-11-18 11:54:46.567617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.176 [2024-11-18 11:54:46.567623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.176 [2024-11-18 11:54:46.567631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.176 [2024-11-18 11:54:46.567638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.176 [2024-11-18 11:54:46.567646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.434 [2024-11-18 11:54:46.966632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
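For readers decoding the repeated *ERROR*/*NOTICE* blocks around here: when a function disappears, the driver fails the controller (nvme_ctrlr_fail) and aborts whatever is still in flight, which on an idle controller is only the standing Asynchronous Event Request admin commands (opcode 0c). The "(00/07)" in each completion is the NVMe status pair: status code type 00h (generic command status) and status code 07h, Command Abort Requested, which is what "ABORTED - BY REQUEST" spells out. A hypothetical lookup helper, not part of the test, just to make the notation concrete:

    # decode_status SCT SC -> human-readable, for the codes seen in this log.
    decode_status() {
        case "$1/$2" in
            00/00) echo "generic: successful completion" ;;
            00/07) echo "generic: command abort requested" ;;
            *)     echo "sct=$1 sc=$2: see the NVMe base spec status tables" ;;
        esac
    }
    decode_status 00 07    # -> generic: command abort requested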
00:11:49.434 [2024-11-18 11:54:46.967508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.434 [2024-11-18 11:54:46.967537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.434 [2024-11-18 11:54:46.967548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.434 [2024-11-18 11:54:46.967560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.434 [2024-11-18 11:54:46.967569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.435 [2024-11-18 11:54:46.967576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.435 [2024-11-18 11:54:46.967595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.435 [2024-11-18 11:54:46.967602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.435 [2024-11-18 11:54:46.967610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.435 [2024-11-18 11:54:46.967617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.435 [2024-11-18 11:54:46.967627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.435 [2024-11-18 11:54:46.967633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.435 11:54:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.435 11:54:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.435 11:54:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.435 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.693 11:54:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.21 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.21 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:12:01.894 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:01.894 11:54:59 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67155 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67155 ']' 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67155 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67155 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:01.894 killing process with pid 67155 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67155' 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67155 00:12:01.894 11:54:59 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67155 00:12:03.272 11:55:00 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:03.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.844 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.844 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.844 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.844 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.844 00:12:03.844 real 2m29.346s 00:12:03.844 user 1m51.070s 00:12:03.844 sys 0m16.758s 00:12:03.844 11:55:01 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.844 ************************************ 00:12:03.844 END TEST sw_hotplug 00:12:03.844 ************************************ 00:12:03.844 11:55:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.107 11:55:01 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:04.107 11:55:01 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.107 11:55:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:04.107 11:55:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.107 11:55:01 -- common/autotest_common.sh@10 -- # set +x 00:12:04.107 ************************************ 00:12:04.107 START TEST nvme_xnvme 00:12:04.107 ************************************ 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:04.107 * Looking for test storage... 00:12:04.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:04.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.107 --rc genhtml_branch_coverage=1 00:12:04.107 --rc genhtml_function_coverage=1 00:12:04.107 --rc genhtml_legend=1 00:12:04.107 --rc geninfo_all_blocks=1 00:12:04.107 --rc geninfo_unexecuted_blocks=1 00:12:04.107 00:12:04.107 ' 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:04.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.107 --rc genhtml_branch_coverage=1 00:12:04.107 --rc genhtml_function_coverage=1 00:12:04.107 --rc genhtml_legend=1 00:12:04.107 --rc geninfo_all_blocks=1 00:12:04.107 --rc geninfo_unexecuted_blocks=1 00:12:04.107 00:12:04.107 ' 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:04.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.107 --rc genhtml_branch_coverage=1 00:12:04.107 --rc genhtml_function_coverage=1 00:12:04.107 --rc genhtml_legend=1 00:12:04.107 --rc geninfo_all_blocks=1 00:12:04.107 --rc geninfo_unexecuted_blocks=1 00:12:04.107 00:12:04.107 ' 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:04.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.107 --rc genhtml_branch_coverage=1 00:12:04.107 --rc genhtml_function_coverage=1 00:12:04.107 --rc genhtml_legend=1 00:12:04.107 --rc geninfo_all_blocks=1 00:12:04.107 --rc geninfo_unexecuted_blocks=1 00:12:04.107 00:12:04.107 ' 00:12:04.107 11:55:01 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.107 11:55:01 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.107 11:55:01 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.107 11:55:01 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.107 11:55:01 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.107 11:55:01 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:04.107 11:55:01 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.107 11:55:01 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.107 11:55:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.107 ************************************ 00:12:04.107 START TEST xnvme_to_malloc_dd_copy 00:12:04.107 ************************************ 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:04.107 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:04.108 11:55:01 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:04.108 11:55:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:04.370 { 00:12:04.370 "subsystems": [ 00:12:04.370 { 00:12:04.370 "subsystem": "bdev", 00:12:04.370 "config": [ 00:12:04.370 { 00:12:04.370 "params": { 00:12:04.370 "block_size": 512, 00:12:04.370 "num_blocks": 2097152, 00:12:04.370 "name": "malloc0" 00:12:04.370 }, 00:12:04.370 "method": "bdev_malloc_create" 00:12:04.370 }, 00:12:04.370 { 00:12:04.370 "params": { 00:12:04.370 "io_mechanism": "libaio", 00:12:04.370 "filename": "/dev/nullb0", 00:12:04.370 "name": "null0" 00:12:04.370 }, 00:12:04.370 "method": "bdev_xnvme_create" 00:12:04.370 }, 00:12:04.370 { 00:12:04.370 "method": "bdev_wait_for_examine" 00:12:04.370 } 00:12:04.370 ] 00:12:04.370 } 00:12:04.370 ] 00:12:04.370 } 00:12:04.370 [2024-11-18 11:55:01.842516] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
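The JSON block just emitted is the entire bdev topology for the first copy: a 1 GiB source (2097152 blocks of 512 B) from bdev_malloc_create, and an xnvme bdev over /dev/nullb0 with the libaio mechanism as the sink. gen_conf writes it to a file descriptor and spdk_dd reads it via --json /dev/fd/62, so no config file ever hits disk. A stand-alone equivalent, assuming root, the null_blk device from init_null_blk, and an SPDK build rooted at $SPDK (the process substitution mirrors the trace's /dev/fd trick):

    "$SPDK/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [
          { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
            "method": "bdev_malloc_create" },
          { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
            "method": "bdev_xnvme_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }]
    }
    EOF
    )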
00:12:04.370 [2024-11-18 11:55:01.842678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68537 ] 00:12:04.370 [2024-11-18 11:55:02.008643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.631 [2024-11-18 11:55:02.114265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.546  [2024-11-18T11:55:05.271Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-18T11:55:06.643Z] Copying: 448/1024 [MB] (223 MBps) [2024-11-18T11:55:07.209Z] Copying: 747/1024 [MB] (299 MBps) [2024-11-18T11:55:09.114Z] Copying: 1024/1024 [MB] (average 261 MBps) 00:12:11.413 00:12:11.413 11:55:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:11.413 11:55:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:11.413 11:55:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:11.413 11:55:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:11.413 { 00:12:11.413 "subsystems": [ 00:12:11.413 { 00:12:11.413 "subsystem": "bdev", 00:12:11.413 "config": [ 00:12:11.413 { 00:12:11.413 "params": { 00:12:11.413 "block_size": 512, 00:12:11.413 "num_blocks": 2097152, 00:12:11.413 "name": "malloc0" 00:12:11.413 }, 00:12:11.413 "method": "bdev_malloc_create" 00:12:11.413 }, 00:12:11.413 { 00:12:11.413 "params": { 00:12:11.413 "io_mechanism": "libaio", 00:12:11.413 "filename": "/dev/nullb0", 00:12:11.413 "name": "null0" 00:12:11.413 }, 00:12:11.413 "method": "bdev_xnvme_create" 00:12:11.413 }, 00:12:11.413 { 00:12:11.413 "method": "bdev_wait_for_examine" 00:12:11.413 } 00:12:11.413 ] 00:12:11.413 } 00:12:11.413 ] 00:12:11.413 } 00:12:11.670 [2024-11-18 11:55:09.113731] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
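The run that follows is the same topology driven in reverse (--ib=null0 --ob=malloc0 at sh@47), which is why the config block repeats unchanged. Both directions lean on the throwaway null_blk device that init_null_blk created before the first copy; its whole lifecycle is two modprobe calls, both verbatim in the trace:

    modprobe null_blk gb=1     # init_null_blk: a 1 GiB RAM-backed /dev/nullb0
    # ... run the malloc0 <-> null0 copies against it ...
    modprobe -r null_blk       # remove_null_blk: tear it down again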
00:12:11.670 [2024-11-18 11:55:09.113852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68628 ] 00:12:11.670 [2024-11-18 11:55:09.271074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.670 [2024-11-18 11:55:09.354043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.568  [2024-11-18T11:55:12.205Z] Copying: 304/1024 [MB] (304 MBps) [2024-11-18T11:55:13.139Z] Copying: 609/1024 [MB] (305 MBps) [2024-11-18T11:55:13.705Z] Copying: 914/1024 [MB] (305 MBps) [2024-11-18T11:55:15.604Z] Copying: 1024/1024 [MB] (average 305 MBps) 00:12:17.903 00:12:17.903 11:55:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:17.903 11:55:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:17.903 11:55:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:17.903 11:55:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:17.903 11:55:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:17.903 11:55:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:17.903 { 00:12:17.903 "subsystems": [ 00:12:17.903 { 00:12:17.903 "subsystem": "bdev", 00:12:17.903 "config": [ 00:12:17.903 { 00:12:17.903 "params": { 00:12:17.903 "block_size": 512, 00:12:17.903 "num_blocks": 2097152, 00:12:17.903 "name": "malloc0" 00:12:17.903 }, 00:12:17.903 "method": "bdev_malloc_create" 00:12:17.903 }, 00:12:17.903 { 00:12:17.903 "params": { 00:12:17.903 "io_mechanism": "io_uring", 00:12:17.903 "filename": "/dev/nullb0", 00:12:17.903 "name": "null0" 00:12:17.903 }, 00:12:17.903 "method": "bdev_xnvme_create" 00:12:17.903 }, 00:12:17.903 { 00:12:17.903 "method": "bdev_wait_for_examine" 00:12:17.903 } 00:12:17.903 ] 00:12:17.903 } 00:12:17.903 ] 00:12:17.903 } 00:12:17.903 [2024-11-18 11:55:15.436686] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
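From here the sh@38 loop repeats both copies with the io_mechanism swapped from libaio to io_uring; nothing else in the JSON changes, so the differences in the Copying summaries isolate the I/O backend. Structurally the loop amounts to this (method_bdev_xnvme_create_0 is the real associative array from xnvme.sh; the elided body is the two spdk_dd passes):

    declare -A method_bdev_xnvme_create_0
    for io in libaio io_uring; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # ... regenerate the JSON and rerun malloc0 -> null0 and back ...
    done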
00:12:17.903 [2024-11-18 11:55:15.436801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68704 ] 00:12:17.903 [2024-11-18 11:55:15.593440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.162 [2024-11-18 11:55:15.675081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.061  [2024-11-18T11:55:18.696Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-18T11:55:19.630Z] Copying: 622/1024 [MB] (311 MBps) [2024-11-18T11:55:19.888Z] Copying: 934/1024 [MB] (311 MBps) [2024-11-18T11:55:21.789Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:12:24.088 00:12:24.088 11:55:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:24.088 11:55:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:24.088 11:55:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.088 11:55:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.088 { 00:12:24.088 "subsystems": [ 00:12:24.088 { 00:12:24.088 "subsystem": "bdev", 00:12:24.088 "config": [ 00:12:24.088 { 00:12:24.088 "params": { 00:12:24.088 "block_size": 512, 00:12:24.088 "num_blocks": 2097152, 00:12:24.088 "name": "malloc0" 00:12:24.088 }, 00:12:24.088 "method": "bdev_malloc_create" 00:12:24.088 }, 00:12:24.088 { 00:12:24.088 "params": { 00:12:24.088 "io_mechanism": "io_uring", 00:12:24.088 "filename": "/dev/nullb0", 00:12:24.088 "name": "null0" 00:12:24.088 }, 00:12:24.088 "method": "bdev_xnvme_create" 00:12:24.088 }, 00:12:24.088 { 00:12:24.088 "method": "bdev_wait_for_examine" 00:12:24.088 } 00:12:24.088 ] 00:12:24.088 } 00:12:24.088 ] 00:12:24.088 } 00:12:24.088 [2024-11-18 11:55:21.636557] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
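Pulling the four averages out of the Copying summaries above and below for easier comparison (1024 MB per pass through the 512 B-block malloc/null pair):

    backend      malloc0 -> null0    null0 -> malloc0
    libaio            261 MBps            305 MBps
    io_uring          311 MBps            317 MBps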
00:12:24.088 [2024-11-18 11:55:21.636676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68775 ] 00:12:24.347 [2024-11-18 11:55:21.790566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.347 [2024-11-18 11:55:21.868633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.247  [2024-11-18T11:55:24.884Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-18T11:55:25.818Z] Copying: 635/1024 [MB] (317 MBps) [2024-11-18T11:55:26.076Z] Copying: 952/1024 [MB] (316 MBps) [2024-11-18T11:55:27.981Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:12:30.280 00:12:30.280 11:55:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:30.280 11:55:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:30.280 00:12:30.280 real 0m26.018s 00:12:30.280 user 0m22.806s 00:12:30.280 sys 0m2.688s 00:12:30.280 11:55:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:30.280 11:55:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:30.280 ************************************ 00:12:30.280 END TEST xnvme_to_malloc_dd_copy 00:12:30.280 ************************************ 00:12:30.280 11:55:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:30.280 11:55:27 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:30.280 11:55:27 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:30.280 11:55:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.280 ************************************ 00:12:30.280 START TEST xnvme_bdevperf 00:12:30.280 ************************************ 00:12:30.280 11:55:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:12:30.280 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:30.280 11:55:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:30.280 11:55:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:30.281 
11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:30.281 11:55:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:30.281 { 00:12:30.281 "subsystems": [ 00:12:30.281 { 00:12:30.281 "subsystem": "bdev", 00:12:30.281 "config": [ 00:12:30.281 { 00:12:30.281 "params": { 00:12:30.281 "io_mechanism": "libaio", 00:12:30.281 "filename": "/dev/nullb0", 00:12:30.281 "name": "null0" 00:12:30.281 }, 00:12:30.281 "method": "bdev_xnvme_create" 00:12:30.281 }, 00:12:30.281 { 00:12:30.281 "method": "bdev_wait_for_examine" 00:12:30.281 } 00:12:30.281 ] 00:12:30.281 } 00:12:30.281 ] 00:12:30.281 } 00:12:30.281 [2024-11-18 11:55:27.896530] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:30.281 [2024-11-18 11:55:27.896657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68874 ] 00:12:30.542 [2024-11-18 11:55:28.058749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.542 [2024-11-18 11:55:28.175882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.803 Running I/O for 5 seconds... 00:12:32.889 153024.00 IOPS, 597.75 MiB/s [2024-11-18T11:55:31.525Z] 171392.00 IOPS, 669.50 MiB/s [2024-11-18T11:55:32.898Z] 181354.67 IOPS, 708.42 MiB/s [2024-11-18T11:55:33.833Z] 186336.00 IOPS, 727.88 MiB/s 00:12:36.132 Latency(us) 00:12:36.132 [2024-11-18T11:55:33.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.132 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:36.132 null0 : 5.00 189359.15 739.68 0.00 0.00 335.58 106.34 2054.30 00:12:36.132 [2024-11-18T11:55:33.833Z] =================================================================================================================== 00:12:36.132 [2024-11-18T11:55:33.833Z] Total : 189359.15 739.68 0.00 0.00 335.58 106.34 2054.30 00:12:36.392 11:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:36.392 11:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:36.392 11:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:36.392 11:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:36.392 11:55:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:36.392 11:55:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:36.392 { 00:12:36.392 "subsystems": [ 00:12:36.392 { 00:12:36.392 "subsystem": "bdev", 00:12:36.392 "config": [ 00:12:36.392 { 00:12:36.392 "params": { 00:12:36.392 "io_mechanism": "io_uring", 00:12:36.392 "filename": "/dev/nullb0", 00:12:36.392 "name": "null0" 00:12:36.392 }, 00:12:36.392 "method": "bdev_xnvme_create" 00:12:36.392 }, 00:12:36.392 { 00:12:36.392 "method": 
"bdev_wait_for_examine" 00:12:36.392 } 00:12:36.392 ] 00:12:36.392 } 00:12:36.392 ] 00:12:36.392 } 00:12:36.651 [2024-11-18 11:55:34.099912] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:36.651 [2024-11-18 11:55:34.100029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68948 ] 00:12:36.651 [2024-11-18 11:55:34.256997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.651 [2024-11-18 11:55:34.332658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.908 Running I/O for 5 seconds... 00:12:39.213 229504.00 IOPS, 896.50 MiB/s [2024-11-18T11:55:37.847Z] 229280.00 IOPS, 895.62 MiB/s [2024-11-18T11:55:38.781Z] 229141.33 IOPS, 895.08 MiB/s [2024-11-18T11:55:39.715Z] 229104.00 IOPS, 894.94 MiB/s 00:12:42.014 Latency(us) 00:12:42.014 [2024-11-18T11:55:39.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.014 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:42.014 null0 : 5.00 229086.77 894.87 0.00 0.00 277.35 148.87 1512.37 00:12:42.014 [2024-11-18T11:55:39.715Z] =================================================================================================================== 00:12:42.014 [2024-11-18T11:55:39.715Z] Total : 229086.77 894.87 0.00 0.00 277.35 148.87 1512.37 00:12:42.581 11:55:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:42.581 11:55:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:42.581 00:12:42.581 real 0m12.286s 00:12:42.581 user 0m9.882s 00:12:42.581 sys 0m2.164s 00:12:42.581 11:55:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.581 11:55:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 END TEST xnvme_bdevperf 00:12:42.581 ************************************ 00:12:42.581 00:12:42.581 real 0m38.571s 00:12:42.581 user 0m32.798s 00:12:42.581 sys 0m4.976s 00:12:42.581 11:55:40 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.581 11:55:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 END TEST nvme_xnvme 00:12:42.581 ************************************ 00:12:42.581 11:55:40 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:42.581 11:55:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:42.581 11:55:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.581 11:55:40 -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 START TEST blockdev_xnvme 00:12:42.581 ************************************ 00:12:42.581 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:42.581 * Looking for test storage... 
00:12:42.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:42.581 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:42.581 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:42.581 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.842 11:55:40 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:42.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.842 --rc genhtml_branch_coverage=1 00:12:42.842 --rc genhtml_function_coverage=1 00:12:42.842 --rc genhtml_legend=1 00:12:42.842 --rc geninfo_all_blocks=1 00:12:42.842 --rc geninfo_unexecuted_blocks=1 00:12:42.842 00:12:42.842 ' 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:42.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.842 --rc genhtml_branch_coverage=1 00:12:42.842 --rc genhtml_function_coverage=1 00:12:42.842 --rc genhtml_legend=1 
00:12:42.842 --rc geninfo_all_blocks=1 00:12:42.842 --rc geninfo_unexecuted_blocks=1 00:12:42.842 00:12:42.842 ' 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:42.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.842 --rc genhtml_branch_coverage=1 00:12:42.842 --rc genhtml_function_coverage=1 00:12:42.842 --rc genhtml_legend=1 00:12:42.842 --rc geninfo_all_blocks=1 00:12:42.842 --rc geninfo_unexecuted_blocks=1 00:12:42.842 00:12:42.842 ' 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:42.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.842 --rc genhtml_branch_coverage=1 00:12:42.842 --rc genhtml_function_coverage=1 00:12:42.842 --rc genhtml_legend=1 00:12:42.842 --rc geninfo_all_blocks=1 00:12:42.842 --rc geninfo_unexecuted_blocks=1 00:12:42.842 00:12:42.842 ' 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69097 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69097 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69097 ']' 00:12:42.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
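Unlike the one-shot spdk_dd and bdevperf runs, blockdev_xnvme keeps a long-lived spdk_tgt daemon and talks to it over the UNIX-domain RPC socket, hence the waitforlisten/killprocess pair around pid 69097. waitforlisten's internals belong to the framework; sketched here as a simple poll (the rpc.py path and rpc_get_methods are real, the 0.2 s cadence is an assumption):

    "$SPDK/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2          # wait for /var/tmp/spdk.sock to accept RPCs
    done
    # ... configure bdevs and run the tests ...
    kill "$spdk_tgt_pid"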
00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.842 11:55:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.842 11:55:40 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:42.842 [2024-11-18 11:55:40.413474] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:42.842 [2024-11-18 11:55:40.413639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69097 ] 00:12:43.102 [2024-11-18 11:55:40.573175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.102 [2024-11-18 11:55:40.659483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.669 11:55:41 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:43.669 11:55:41 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:12:43.669 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:43.669 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:43.669 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:43.669 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:43.669 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:43.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:44.185 Waiting for block devices as requested 00:12:44.185 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.185 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.185 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:44.443 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.708 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:49.708 11:55:46 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.708 11:55:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.708 11:55:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:49.708 nvme0n1 00:12:49.708 nvme1n1 00:12:49.708 nvme2n1 00:12:49.708 nvme2n2 00:12:49.708 nvme2n3 00:12:49.708 nvme3n1 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.708 11:55:47 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.708 11:55:47 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.708 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:49.709 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "daf0adfe-65e6-48a7-81cb-6ec4cbf7ffc8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "daf0adfe-65e6-48a7-81cb-6ec4cbf7ffc8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1aa2ce9e-2658-4715-94fc-62f2f1c24cf8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1aa2ce9e-2658-4715-94fc-62f2f1c24cf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c1835c78-0962-436b-8176-a9a31d486b29"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1835c78-0962-436b-8176-a9a31d486b29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' 
' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2f748ac0-771d-42dd-9e6a-880b17c40114"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f748ac0-771d-42dd-9e6a-880b17c40114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5c5c6188-a488-4254-9871-7d529a76f6f7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5c5c6188-a488-4254-9871-7d529a76f6f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5b0c5e09-9e20-40ed-b390-333bcba89216"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5b0c5e09-9e20-40ed-b390-333bcba89216",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:49.709 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:49.709 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:49.709 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:49.709 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:49.709 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69097 00:12:49.709 11:55:47 
blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69097 ']' 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69097 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69097 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69097' 00:12:49.709 killing process with pid 69097 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69097 00:12:49.709 11:55:47 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69097 00:12:50.646 11:55:48 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:50.646 11:55:48 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:50.646 11:55:48 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:50.646 11:55:48 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.646 11:55:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.646 ************************************ 00:12:50.646 START TEST bdev_hello_world 00:12:50.646 ************************************ 00:12:50.646 11:55:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:50.905 [2024-11-18 11:55:48.377266] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:12:50.905 [2024-11-18 11:55:48.377394] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69445 ] 00:12:50.905 [2024-11-18 11:55:48.533457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.163 [2024-11-18 11:55:48.609167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.422 [2024-11-18 11:55:48.890356] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:51.422 [2024-11-18 11:55:48.890395] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:51.422 [2024-11-18 11:55:48.890407] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:51.422 [2024-11-18 11:55:48.891863] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:51.422 [2024-11-18 11:55:48.892194] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:51.422 [2024-11-18 11:55:48.892222] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:51.422 [2024-11-18 11:55:48.892470] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
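The hello_world step above drives SPDK's stock hello_bdev example against the first xNVMe bdev: it loads the bdev configuration from a JSON file, opens the named bdev, writes a buffer, reads it back, and stops the app once the string round-trips. A minimal way to rerun that step by hand, assuming a built SPDK tree at the path shown in the trace (sudo and hugepage setup are environment-specific):

    # Re-run the hello_bdev example against the nvme0n1 xNVMe bdev.
    # --json points at the config that created the bdevs; -b picks the bdev to open.
    SPDK=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b nvme0n1

On success it logs the same NOTICE lines seen above, ending with 'Read string from bdev : Hello World!'.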
00:12:51.422 
00:12:51.422 [2024-11-18 11:55:48.892498] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:12:51.992 
00:12:51.992 real 0m1.120s
00:12:51.992 user 0m0.861s
00:12:51.992 sys 0m0.149s
00:12:51.992 11:55:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:51.992 ************************************
00:12:51.992 END TEST bdev_hello_world
00:12:51.992 ************************************
00:12:51.992 11:55:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:12:51.992 11:55:49 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:12:51.992 11:55:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:12:51.992 11:55:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:51.992 11:55:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:51.992 ************************************
00:12:51.992 START TEST bdev_bounds
00:12:51.992 ************************************
00:12:51.992 Process bdevio pid: 69482
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds ''
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69482
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69482'
00:12:51.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69482
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69482 ']'
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:51.992 11:55:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:12:51.992 [2024-11-18 11:55:49.546959] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
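waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly started bdevio process answers on its RPC socket. A simplified stand-in for that helper (the real one in autotest_common.sh does more bookkeeping; rpc.py and the rpc_get_methods call are standard SPDK, and the retry count is taken from the trace):

    # Poll until the app with pid $1 serves RPCs on the given socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # process died while we waited
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1                                      # never came up
    }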
00:12:51.992 [2024-11-18 11:55:49.547050] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69482 ]
00:12:52.250 [2024-11-18 11:55:49.697046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:52.250 [2024-11-18 11:55:49.776378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:52.250 [2024-11-18 11:55:49.776542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:52.250 [2024-11-18 11:55:49.776684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:52.815 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:52.815 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0
00:12:52.815 11:55:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:12:52.815 I/O targets:
00:12:52.815 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:12:52.815 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:12:52.815 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:52.815 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:52.815 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:52.815 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:12:52.815 
00:12:52.815 
00:12:52.815 CUnit - A unit testing framework for C - Version 2.1-3
00:12:52.815 http://cunit.sourceforge.net/
00:12:52.815 
00:12:52.815 
00:12:52.815 Suite: bdevio tests on: nvme3n1
00:12:52.815 Test: blockdev write read block ...passed
00:12:52.815 Test: blockdev write zeroes read block ...passed
00:12:52.815 Test: blockdev write zeroes read no split ...passed
00:12:52.815 Test: blockdev write zeroes read split ...passed
00:12:53.073 Test: blockdev write zeroes read split partial ...passed
00:12:53.073 Test: blockdev reset ...passed
00:12:53.073 Test: blockdev write read 8 blocks ...passed
00:12:53.073 Test: blockdev write read size > 128k ...passed
00:12:53.073 Test: blockdev write read invalid size ...passed
00:12:53.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:53.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:53.073 Test: blockdev write read max offset ...passed
00:12:53.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:53.073 Test: blockdev writev readv 8 blocks ...passed
00:12:53.073 Test: blockdev writev readv 30 x 1block ...passed
00:12:53.073 Test: blockdev writev readv block ...passed
00:12:53.073 Test: blockdev writev readv size > 128k ...passed
00:12:53.073 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:53.073 Test: blockdev comparev and writev ...passed
00:12:53.073 Test: blockdev nvme passthru rw ...passed
00:12:53.073 Test: blockdev nvme passthru vendor specific ...passed
00:12:53.073 Test: blockdev nvme admin passthru ...passed
00:12:53.073 Test: blockdev copy ...passed
00:12:53.073 Suite: bdevio tests on: nvme2n3
00:12:53.073 Test: blockdev write read block ...passed
00:12:53.073 Test: blockdev write zeroes read block ...passed
00:12:53.073 Test: blockdev write zeroes read no split ...passed
00:12:53.073 Test: blockdev write zeroes read split ...passed
00:12:53.073 Test: blockdev write zeroes read split partial ...passed
00:12:53.073 Test: blockdev reset ...passed
00:12:53.073 Test: blockdev write read 8 blocks ...passed
00:12:53.073 Test: blockdev write read size > 128k ...passed
00:12:53.073 Test: blockdev write read invalid size ...passed
00:12:53.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:53.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:53.073 Test: blockdev write read max offset ...passed
00:12:53.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:53.073 Test: blockdev writev readv 8 blocks ...passed
00:12:53.073 Test: blockdev writev readv 30 x 1block ...passed
00:12:53.073 Test: blockdev writev readv block ...passed
00:12:53.073 Test: blockdev writev readv size > 128k ...passed
00:12:53.073 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:53.073 Test: blockdev comparev and writev ...passed
00:12:53.073 Test: blockdev nvme passthru rw ...passed
00:12:53.073 Test: blockdev nvme passthru vendor specific ...passed
00:12:53.073 Test: blockdev nvme admin passthru ...passed
00:12:53.073 Test: blockdev copy ...passed
00:12:53.073 Suite: bdevio tests on: nvme2n2
00:12:53.073 Test: blockdev write read block ...passed
00:12:53.073 Test: blockdev write zeroes read block ...passed
00:12:53.073 Test: blockdev write zeroes read no split ...passed
00:12:53.073 Test: blockdev write zeroes read split ...passed
00:12:53.073 Test: blockdev write zeroes read split partial ...passed
00:12:53.073 Test: blockdev reset ...passed
00:12:53.073 Test: blockdev write read 8 blocks ...passed
00:12:53.073 Test: blockdev write read size > 128k ...passed
00:12:53.073 Test: blockdev write read invalid size ...passed
00:12:53.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:53.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:53.073 Test: blockdev write read max offset ...passed
00:12:53.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:53.073 Test: blockdev writev readv 8 blocks ...passed
00:12:53.073 Test: blockdev writev readv 30 x 1block ...passed
00:12:53.073 Test: blockdev writev readv block ...passed
00:12:53.073 Test: blockdev writev readv size > 128k ...passed
00:12:53.073 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:53.073 Test: blockdev comparev and writev ...passed
00:12:53.073 Test: blockdev nvme passthru rw ...passed
00:12:53.073 Test: blockdev nvme passthru vendor specific ...passed
00:12:53.073 Test: blockdev nvme admin passthru ...passed
00:12:53.073 Test: blockdev copy ...passed
00:12:53.073 Suite: bdevio tests on: nvme2n1
00:12:53.073 Test: blockdev write read block ...passed
00:12:53.073 Test: blockdev write zeroes read block ...passed
00:12:53.073 Test: blockdev write zeroes read no split ...passed
00:12:53.073 Test: blockdev write zeroes read split ...passed
00:12:53.073 Test: blockdev write zeroes read split partial ...passed
00:12:53.073 Test: blockdev reset ...passed
00:12:53.073 Test: blockdev write read 8 blocks ...passed
00:12:53.073 Test: blockdev write read size > 128k ...passed
00:12:53.073 Test: blockdev write read invalid size ...passed
00:12:53.073 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:53.073 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:53.073 Test: blockdev write read max offset ...passed
00:12:53.073 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:53.073 Test: blockdev writev readv 8 blocks ...passed
00:12:53.073 Test: blockdev writev readv 30 x 1block ...passed
00:12:53.073 Test: blockdev writev readv block ...passed
00:12:53.073 Test: blockdev writev readv size > 128k ...passed
00:12:53.073 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:53.073 Test: blockdev comparev and writev ...passed
00:12:53.073 Test: blockdev nvme passthru rw ...passed
00:12:53.073 Test: blockdev nvme passthru vendor specific ...passed
00:12:53.073 Test: blockdev nvme admin passthru ...passed
00:12:53.073 Test: blockdev copy ...passed
00:12:53.073 Suite: bdevio tests on: nvme1n1
00:12:53.073 Test: blockdev write read block ...passed
00:12:53.073 Test: blockdev write zeroes read block ...passed
00:12:53.073 Test: blockdev write zeroes read no split ...passed
00:12:53.074 Test: blockdev write zeroes read split ...passed
00:12:53.074 Test: blockdev write zeroes read split partial ...passed
00:12:53.074 Test: blockdev reset ...passed
00:12:53.074 Test: blockdev write read 8 blocks ...passed
00:12:53.074 Test: blockdev write read size > 128k ...passed
00:12:53.074 Test: blockdev write read invalid size ...passed
00:12:53.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:53.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:53.074 Test: blockdev write read max offset ...passed
00:12:53.074 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:53.074 Test: blockdev writev readv 8 blocks ...passed
00:12:53.074 Test: blockdev writev readv 30 x 1block ...passed
00:12:53.074 Test: blockdev writev readv block ...passed
00:12:53.074 Test: blockdev writev readv size > 128k ...passed
00:12:53.074 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:53.074 Test: blockdev comparev and writev ...passed
00:12:53.074 Test: blockdev nvme passthru rw ...passed
00:12:53.074 Test: blockdev nvme passthru vendor specific ...passed
00:12:53.074 Test: blockdev nvme admin passthru ...passed
00:12:53.074 Test: blockdev copy ...passed
00:12:53.074 Suite: bdevio tests on: nvme0n1
00:12:53.074 Test: blockdev write read block ...passed
00:12:53.074 Test: blockdev write zeroes read block ...passed
00:12:53.074 Test: blockdev write zeroes read no split ...passed
00:12:53.074 Test: blockdev write zeroes read split ...passed
00:12:53.332 Test: blockdev write zeroes read split partial ...passed
00:12:53.332 Test: blockdev reset ...passed
00:12:53.332 Test: blockdev write read 8 blocks ...passed
00:12:53.332 Test: blockdev write read size > 128k ...passed
00:12:53.332 Test: blockdev write read invalid size ...passed
00:12:53.332 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:53.332 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:53.332 Test: blockdev write read max offset ...passed
00:12:53.332 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:53.332 Test: blockdev writev readv 8 blocks ...passed
00:12:53.332 Test: blockdev writev readv 30 x 1block ...passed
00:12:53.332 Test: blockdev writev readv block ...passed
00:12:53.332 Test: blockdev writev readv size > 128k ...passed
00:12:53.332 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:53.332 Test: blockdev comparev and writev ...passed
00:12:53.332 Test: blockdev nvme passthru rw ...passed
00:12:53.332 Test: blockdev nvme passthru vendor specific ...passed
00:12:53.332 Test: blockdev nvme admin passthru ...passed
00:12:53.332 Test: blockdev copy ...passed
00:12:53.332 
00:12:53.332 Run Summary: Type Total Ran Passed Failed Inactive
00:12:53.332 suites 6 6 n/a 0 0
00:12:53.332 tests 138 138 138 0 0
00:12:53.332 asserts 780 780 780 0 n/a
00:12:53.332 
00:12:53.332 Elapsed time = 0.838 seconds 0
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69482
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69482 ']'
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69482
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69482
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:12:53.332 killing process with pid 69482 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69482'
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69482
00:12:53.332 11:55:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69482
00:12:53.899 ************************************
00:12:53.899 END TEST bdev_bounds
00:12:53.899 ************************************
00:12:53.899 11:55:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:12:53.899 
00:12:53.899 real 0m1.894s
00:12:53.899 user 0m4.855s
00:12:53.899 sys 0m0.249s
00:12:53.899 11:55:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:53.899 11:55:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:12:53.899 11:55:51 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:12:53.899 11:55:51 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:12:53.899 11:55:51 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:53.899 11:55:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:53.899 ************************************
00:12:53.899 START TEST bdev_nbd
00:12:53.899 ************************************
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
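nbd_function_test, which starts here, exports each of the six bdevs as a kernel block device over the Network Block Device driver so that ordinary tools (dd, /proc/partitions) can exercise them. The wiring is one nbd_start_disk RPC per bdev/device pair against the dedicated /var/tmp/spdk-nbd.sock server, roughly as in this sketch (names and paths taken from the trace; error handling omitted):

    # Pair bdev_list[i] with nbd_list[i] via the spdk-nbd RPC server.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdev_list[@]}"; do
        # nbd_start_disk <bdev_name> <nbd_device> exposes the bdev as /dev/nbdX
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
    done

nbd_stop_disk, seen later in the trace, tears the same pairs down.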
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:53.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69530
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69530 /var/tmp/spdk-nbd.sock
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69530 ']'
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:12:53.899 11:55:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
[2024-11-18 11:55:51.503573] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
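Once bdev_svc is listening, each nbd_start_disk in the trace below is followed by a waitfornbd probe: the harness waits for the device to appear in /proc/partitions, then reads a single 4 KiB block with O_DIRECT to prove the data path works end to end (these are the grep and dd lines that follow). A condensed sketch of that probe, assuming the 20-attempt retry budget shown in the trace:

    # Wait for /dev/$1 to register, then read one 4 KiB block through it.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # iflag=direct bypasses the page cache so the read really hits the bdev
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    }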
00:12:53.900 [2024-11-18 11:55:51.503778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.158 [2024-11-18 11:55:51.653624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.158 [2024-11-18 11:55:51.729421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:54.724 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:54.725 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:54.725 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.983 
1+0 records in 00:12:54.983 1+0 records out 00:12:54.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314601 s, 13.0 MB/s 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:54.983 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:55.241 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:55.241 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:55.241 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:55.241 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:55.241 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:55.241 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.242 1+0 records in 00:12:55.242 1+0 records out 00:12:55.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586866 s, 7.0 MB/s 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.242 11:55:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:55.502 11:55:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.502 1+0 records in 00:12:55.502 1+0 records out 00:12:55.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595575 s, 6.9 MB/s 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.502 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.763 1+0 records in 00:12:55.763 1+0 records out 00:12:55.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000983035 s, 4.2 MB/s 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:55.763 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.024 1+0 records in 00:12:56.024 1+0 records out 00:12:56.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917404 s, 4.5 MB/s 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.024 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:12:56.285 11:55:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.285 1+0 records in 00:12:56.285 1+0 records out 00:12:56.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117444 s, 3.5 MB/s 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd0", 00:12:56.285 "bdev_name": "nvme0n1" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd1", 00:12:56.285 "bdev_name": "nvme1n1" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd2", 00:12:56.285 "bdev_name": "nvme2n1" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd3", 00:12:56.285 "bdev_name": "nvme2n2" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd4", 00:12:56.285 "bdev_name": "nvme2n3" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd5", 00:12:56.285 "bdev_name": "nvme3n1" 00:12:56.285 } 00:12:56.285 ]' 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:56.285 11:55:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd0", 00:12:56.285 "bdev_name": "nvme0n1" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd1", 00:12:56.285 "bdev_name": "nvme1n1" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd2", 00:12:56.285 "bdev_name": "nvme2n1" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd3", 00:12:56.285 "bdev_name": "nvme2n2" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd4", 00:12:56.285 "bdev_name": "nvme2n3" 00:12:56.285 }, 00:12:56.285 { 00:12:56.285 "nbd_device": "/dev/nbd5", 00:12:56.285 "bdev_name": "nvme3n1" 00:12:56.285 } 00:12:56.285 ]' 00:12:56.285 11:55:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.545 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.805 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.066 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.327 11:55:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.589 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.848 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:58.105 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.106 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:58.106 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.106 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:58.106 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.106 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.106 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:58.364 /dev/nbd0 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.364 1+0 records in 00:12:58.364 1+0 records out 00:12:58.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584257 s, 7.0 MB/s 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.364 11:55:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:58.364 /dev/nbd1 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.623 1+0 records in 00:12:58.623 1+0 records out 00:12:58.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488098 s, 8.4 MB/s 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:58.623 11:55:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:58.623 /dev/nbd10 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.623 1+0 records in 00:12:58.623 1+0 records out 00:12:58.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498945 s, 8.2 MB/s 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.623 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:58.882 /dev/nbd11 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:58.882 11:55:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.882 1+0 records in 00:12:58.882 1+0 records out 00:12:58.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356523 s, 11.5 MB/s 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:58.882 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:59.140 /dev/nbd12 00:12:59.140 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:59.140 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.141 1+0 records in 00:12:59.141 1+0 records out 00:12:59.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588767 s, 7.0 MB/s 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.141 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:59.430 /dev/nbd13 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.430 1+0 records in 00:12:59.430 1+0 records out 00:12:59.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388403 s, 10.5 MB/s 00:12:59.430 11:55:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.430 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.708 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd0", 00:12:59.708 "bdev_name": "nvme0n1" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd1", 00:12:59.708 "bdev_name": "nvme1n1" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd10", 00:12:59.708 "bdev_name": "nvme2n1" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd11", 00:12:59.708 "bdev_name": "nvme2n2" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd12", 00:12:59.708 "bdev_name": "nvme2n3" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd13", 00:12:59.708 "bdev_name": "nvme3n1" 00:12:59.708 } 00:12:59.708 ]' 00:12:59.708 11:55:57 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd0", 00:12:59.708 "bdev_name": "nvme0n1" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd1", 00:12:59.708 "bdev_name": "nvme1n1" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd10", 00:12:59.708 "bdev_name": "nvme2n1" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd11", 00:12:59.708 "bdev_name": "nvme2n2" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd12", 00:12:59.708 "bdev_name": "nvme2n3" 00:12:59.708 }, 00:12:59.708 { 00:12:59.708 "nbd_device": "/dev/nbd13", 00:12:59.708 "bdev_name": "nvme3n1" 00:12:59.708 } 00:12:59.708 ]' 00:12:59.708 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:59.709 /dev/nbd1 00:12:59.709 /dev/nbd10 00:12:59.709 /dev/nbd11 00:12:59.709 /dev/nbd12 00:12:59.709 /dev/nbd13' 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:59.709 /dev/nbd1 00:12:59.709 /dev/nbd10 00:12:59.709 /dev/nbd11 00:12:59.709 /dev/nbd12 00:12:59.709 /dev/nbd13' 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:59.709 256+0 records in 00:12:59.709 256+0 records out 00:12:59.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00768314 s, 136 MB/s 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:59.709 256+0 records in 00:12:59.709 256+0 records out 00:12:59.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0523878 s, 20.0 MB/s 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:59.709 256+0 records in 00:12:59.709 256+0 records out 00:12:59.709 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0617092 s, 17.0 MB/s 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.709 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:59.967 256+0 records in 00:12:59.967 256+0 records out 00:12:59.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0514398 s, 20.4 MB/s 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:59.967 256+0 records in 00:12:59.967 256+0 records out 00:12:59.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.04824 s, 21.7 MB/s 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:59.967 256+0 records in 00:12:59.967 256+0 records out 00:12:59.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0475182 s, 22.1 MB/s 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:59.967 256+0 records in 00:12:59.967 256+0 records out 00:12:59.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0475446 s, 22.1 MB/s 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.967 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.968 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.968 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.968 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.968 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.226 11:55:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.485 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.743 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.002 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.260 11:55:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.260 11:55:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:01.519 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:01.776 malloc_lvol_verify 00:13:01.776 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:02.035 5b5093cd-08fc-4582-9286-7a54ec649e20 00:13:02.035 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:02.035 f6bd2dbb-c95c-4a1e-9941-46ea851ee816 00:13:02.035 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:02.293 /dev/nbd0 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
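(Note: the nbd_with_lvol_verify step traced above reduces to a short RPC sequence against the spdk-nbd socket: create a 16 MB malloc bdev with 512-byte blocks, build a logical-volume store on it, carve out a 4 MB lvol, export it as /dev/nbd0, wait for the kernel to publish a non-zero capacity in sysfs, then format it. A condensed sketch follows; every RPC name and argument is taken verbatim from the trace, while the retry loop around the sysfs check is an assumption, since the log only shows one successful probe returning 8192 sectors.)

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol, prints its UUID
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # wait until sysfs reports a non-zero size before touching the device
    # (assumed polling loop; the trace shows a single check of /sys/block/nbd0/size)
    while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done
    mkfs.ext4 /dev/nbd0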
00:13:02.293 mke2fs 1.47.0 (5-Feb-2023) 00:13:02.293 Discarding device blocks: 0/4096 done 00:13:02.293 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:02.293 00:13:02.293 Allocating group tables: 0/1 done 00:13:02.293 Writing inode tables: 0/1 done 00:13:02.293 Creating journal (1024 blocks): done 00:13:02.293 Writing superblocks and filesystem accounting information: 0/1 done 00:13:02.293 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.293 11:55:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69530 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69530 ']' 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69530 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69530 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:02.551 killing process with pid 69530 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69530' 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69530 00:13:02.551 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69530 00:13:03.121 11:56:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:03.121 00:13:03.121 real 0m9.311s 00:13:03.121 user 0m13.455s 00:13:03.121 sys 0m3.105s 00:13:03.121 11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.121 ************************************ 00:13:03.121 END TEST bdev_nbd 00:13:03.121 ************************************ 00:13:03.121 
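(Note: two patterns recur throughout the bdev_nbd test above. Device readiness is polled by grepping /proc/partitions: waitfornbd breaks once the name appears and then proves the device serves reads with a direct-I/O dd, while waitfornbd_exit breaks once the name disappears after nbd_stop_disk. Data integrity is checked by writing one random 1 MiB file to every exported /dev/nbdX and comparing it back with cmp. A sketch of both, reconstructed from the trace; the sleep between probes is an assumption, since the log only shows the counter and the grep.)

    waitfornbd() {                          # wait for /dev/$1 to appear and be readable
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed probe interval
        done
        # second loop in the trace: confirm the device serves reads, not just exists
        dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s nbdtest); rm -f nbdtest
        [ "$size" != 0 ]
    }

    waitfornbd_exit() {                     # wait for /dev/$1 to disappear
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                       # assumed probe interval
        done
        return 0
    }

    # write/verify pass over all exported devices, as in nbd_dd_data_verify
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M nbdrandtest "$nbd"     # byte-for-byte check of the first 1 MiB
    done
    rm nbdrandtest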
11:56:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:03.121 11:56:00 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:03.121 11:56:00 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:03.121 11:56:00 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:03.121 11:56:00 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:03.121 11:56:00 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:03.121 11:56:00 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.121 11:56:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.121 ************************************ 00:13:03.121 START TEST bdev_fio 00:13:03.121 ************************************ 00:13:03.121 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:13:03.121 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:03.383 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:03.383 ************************************ 00:13:03.383 START TEST bdev_fio_rw_verify 00:13:03.383 ************************************ 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.383 11:56:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:03.384 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:03.384 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:03.384 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:03.384 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:03.384 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:03.384 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:03.384 fio-3.35 00:13:03.384 Starting 6 threads 00:13:15.602 00:13:15.602 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69920: Mon Nov 18 11:56:11 2024 00:13:15.602 read: IOPS=30.9k, BW=121MiB/s (127MB/s)(1207MiB/10002msec) 00:13:15.602 slat (usec): min=2, max=2724, avg= 4.79, stdev=10.22 00:13:15.602 clat (usec): min=71, max=6970, avg=591.78, 
stdev=568.40 00:13:15.602 lat (usec): min=75, max=6992, avg=596.57, stdev=568.91 00:13:15.602 clat percentiles (usec): 00:13:15.602 | 50.000th=[ 404], 99.000th=[ 2868], 99.900th=[ 4228], 99.990th=[ 5538], 00:13:15.602 | 99.999th=[ 6915] 00:13:15.602 write: IOPS=31.3k, BW=122MiB/s (128MB/s)(1221MiB/10002msec); 0 zone resets 00:13:15.602 slat (usec): min=10, max=6602, avg=25.37, stdev=85.84 00:13:15.602 clat (usec): min=66, max=8378, avg=720.06, stdev=653.14 00:13:15.602 lat (usec): min=84, max=8407, avg=745.43, stdev=666.64 00:13:15.602 clat percentiles (usec): 00:13:15.602 | 50.000th=[ 490], 99.000th=[ 3261], 99.900th=[ 4621], 99.990th=[ 6325], 00:13:15.602 | 99.999th=[ 8356] 00:13:15.602 bw ( KiB/s): min=57519, max=195273, per=100.00%, avg=128710.21, stdev=7983.39, samples=114 00:13:15.602 iops : min=14379, max=48818, avg=32176.79, stdev=1995.86, samples=114 00:13:15.602 lat (usec) : 100=0.09%, 250=15.33%, 500=42.67%, 750=18.78%, 1000=6.38% 00:13:15.602 lat (msec) : 2=11.66%, 4=4.86%, 10=0.23% 00:13:15.602 cpu : usr=48.31%, sys=30.13%, ctx=8383, majf=0, minf=26019 00:13:15.602 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:15.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.602 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.602 issued rwts: total=309114,312585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.602 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:15.602 00:13:15.602 Run status group 0 (all jobs): 00:13:15.602 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=1207MiB (1266MB), run=10002-10002msec 00:13:15.602 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=1221MiB (1280MB), run=10002-10002msec 00:13:15.602 ----------------------------------------------------- 00:13:15.602 Suppressions used: 00:13:15.602 count bytes template 00:13:15.602 6 48 /usr/src/fio/parse.c 00:13:15.602 3246 311616 /usr/src/fio/iolog.c 00:13:15.602 1 8 libtcmalloc_minimal.so 00:13:15.602 1 904 libcrypto.so 00:13:15.602 ----------------------------------------------------- 00:13:15.602 00:13:15.602 00:13:15.602 real 0m11.871s 00:13:15.602 user 0m30.436s 00:13:15.602 sys 0m18.362s 00:13:15.602 ************************************ 00:13:15.602 END TEST bdev_fio_rw_verify 00:13:15.602 ************************************ 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "daf0adfe-65e6-48a7-81cb-6ec4cbf7ffc8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "daf0adfe-65e6-48a7-81cb-6ec4cbf7ffc8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1aa2ce9e-2658-4715-94fc-62f2f1c24cf8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1aa2ce9e-2658-4715-94fc-62f2f1c24cf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c1835c78-0962-436b-8176-a9a31d486b29"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1835c78-0962-436b-8176-a9a31d486b29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2f748ac0-771d-42dd-9e6a-880b17c40114"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f748ac0-771d-42dd-9e6a-880b17c40114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5c5c6188-a488-4254-9871-7d529a76f6f7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5c5c6188-a488-4254-9871-7d529a76f6f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5b0c5e09-9e20-40ed-b390-333bcba89216"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5b0c5e09-9e20-40ed-b390-333bcba89216",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.602 /home/vagrant/spdk_repo/spdk 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:13:15.602 00:13:15.602 real 0m12.037s 00:13:15.602 user 0m30.510s 00:13:15.602 sys 0m18.434s 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.602 11:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:15.602 ************************************ 00:13:15.602 END TEST bdev_fio 00:13:15.602 ************************************ 00:13:15.602 11:56:12 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:15.602 11:56:12 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:15.602 11:56:12 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:15.602 11:56:12 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.602 11:56:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.602 ************************************ 00:13:15.602 START TEST bdev_verify 00:13:15.602 ************************************ 00:13:15.602 11:56:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:15.602 [2024-11-18 11:56:12.989887] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:13:15.602 [2024-11-18 11:56:12.990618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70101 ] 00:13:15.602 [2024-11-18 11:56:13.155245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:15.602 [2024-11-18 11:56:13.279763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.602 [2024-11-18 11:56:13.279856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.172 Running I/O for 5 seconds... 
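(Note: the bdev_verify stage starting here drives all six bdevs from the same JSON config with SPDK's bdevperf example app. The flag readings below follow the visible command line; the interpretation of -C is inferred from the results that follow, where every namespace shows one job per core in mask 0x3.)

    # -q outstanding I/Os per job, -o I/O size in bytes, -w workload pattern,
    # -t run time in seconds, -m reactor core mask; -C appears to let every
    # core in the mask drive every bdev (hence the 0x1 and 0x2 jobs per device)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3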
00:13:18.498 24128.00 IOPS, 94.25 MiB/s [2024-11-18T11:56:17.140Z] 23824.00 IOPS, 93.06 MiB/s [2024-11-18T11:56:18.084Z] 24256.00 IOPS, 94.75 MiB/s [2024-11-18T11:56:19.027Z] 23504.00 IOPS, 91.81 MiB/s [2024-11-18T11:56:19.027Z] 23456.00 IOPS, 91.62 MiB/s 00:13:21.326 Latency(us) 00:13:21.326 [2024-11-18T11:56:19.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.326 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:21.326 Verification LBA range: start 0x0 length 0xa0000 00:13:21.326 nvme0n1 : 5.06 1720.34 6.72 0.00 0.00 74275.49 8166.79 84289.38 00:13:21.326 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:21.326 Verification LBA range: start 0xa0000 length 0xa0000 00:13:21.326 nvme0n1 : 5.05 1367.96 5.34 0.00 0.00 93417.67 15829.46 104857.60 00:13:21.326 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:21.326 Verification LBA range: start 0x0 length 0xbd0bd 00:13:21.326 nvme1n1 : 5.05 2650.25 10.35 0.00 0.00 48095.35 3856.54 53638.70 00:13:21.326 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:21.326 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:21.326 nvme1n1 : 5.06 2650.04 10.35 0.00 0.00 48069.64 4839.58 59284.87 00:13:21.326 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:21.326 Verification LBA range: start 0x0 length 0x80000 00:13:21.326 nvme2n1 : 5.07 1869.69 7.30 0.00 0.00 67936.50 5268.09 61301.37 00:13:21.326 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:21.326 Verification LBA range: start 0x80000 length 0x80000 00:13:21.327 nvme2n1 : 5.04 1855.47 7.25 0.00 0.00 68394.01 6377.16 59284.87 00:13:21.327 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:21.327 Verification LBA range: start 0x0 length 0x80000 00:13:21.327 nvme2n2 : 5.05 1849.63 7.23 0.00 0.00 68512.42 9175.04 59284.87 00:13:21.327 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:21.327 Verification LBA range: start 0x80000 length 0x80000 00:13:21.327 nvme2n2 : 5.07 1842.41 7.20 0.00 0.00 68722.88 7511.43 65737.65 00:13:21.327 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:21.327 Verification LBA range: start 0x0 length 0x80000 00:13:21.327 nvme2n3 : 5.07 1843.66 7.20 0.00 0.00 68619.41 11897.30 57671.68 00:13:21.327 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:21.327 Verification LBA range: start 0x80000 length 0x80000 00:13:21.327 nvme2n3 : 5.07 1841.89 7.19 0.00 0.00 68618.56 7965.14 63317.86 00:13:21.327 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:21.327 Verification LBA range: start 0x0 length 0x20000 00:13:21.327 nvme3n1 : 5.08 1864.51 7.28 0.00 0.00 67742.03 3377.62 60494.77 00:13:21.327 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:21.327 Verification LBA range: start 0x20000 length 0x20000 00:13:21.327 nvme3n1 : 5.08 1839.96 7.19 0.00 0.00 68591.15 3680.10 62107.96 00:13:21.327 [2024-11-18T11:56:19.028Z] =================================================================================================================== 00:13:21.327 [2024-11-18T11:56:19.028Z] Total : 23195.82 90.61 0.00 0.00 65667.18 3377.62 104857.60 00:13:22.283 00:13:22.283 real 0m6.706s 00:13:22.283 user 0m10.654s 00:13:22.283 sys 0m1.611s 00:13:22.283 ************************************ 00:13:22.283 END TEST 
bdev_verify 00:13:22.283 ************************************ 00:13:22.283 11:56:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:22.283 11:56:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:22.283 11:56:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:22.283 11:56:19 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:22.283 11:56:19 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:22.283 11:56:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.283 ************************************ 00:13:22.283 START TEST bdev_verify_big_io 00:13:22.283 ************************************ 00:13:22.283 11:56:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:22.283 [2024-11-18 11:56:19.766920] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:13:22.283 [2024-11-18 11:56:19.767069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70200 ] 00:13:22.283 [2024-11-18 11:56:19.928570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.544 [2024-11-18 11:56:20.056397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.544 [2024-11-18 11:56:20.056506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.115 Running I/O for 5 seconds... 
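
bdev_verify_big_io is the same harness with one change, -o 65536, so every verify I/O is 64 KiB instead of 4 KiB; this leans on the large-transfer and I/O-splitting paths, and the much lower IOPS in the table below (with Fail/s and TO/s holding at zero) is the expected trade for the bigger block size. The manual form, again using this run's paths:

  $ /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
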
00:13:29.024 1660.00 IOPS, 103.75 MiB/s [2024-11-18T11:56:26.725Z] 2933.00 IOPS, 183.31 MiB/s 00:13:29.024 Latency(us) 00:13:29.024 [2024-11-18T11:56:26.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.024 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x0 length 0xa000 00:13:29.024 nvme0n1 : 5.75 164.27 10.27 0.00 0.00 747927.44 10889.06 1058255.16 00:13:29.024 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0xa000 length 0xa000 00:13:29.024 nvme0n1 : 5.95 72.23 4.51 0.00 0.00 1715774.22 192776.66 3006993.33 00:13:29.024 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x0 length 0xbd0b 00:13:29.024 nvme1n1 : 5.62 156.56 9.78 0.00 0.00 770315.19 129055.51 1458327.24 00:13:29.024 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:29.024 nvme1n1 : 5.96 118.22 7.39 0.00 0.00 1016686.49 14317.10 1174405.12 00:13:29.024 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x0 length 0x8000 00:13:29.024 nvme2n1 : 5.80 162.69 10.17 0.00 0.00 722301.99 53235.40 1155046.79 00:13:29.024 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x8000 length 0x8000 00:13:29.024 nvme2n1 : 5.96 96.69 6.04 0.00 0.00 1205445.80 151640.22 1226027.32 00:13:29.024 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x0 length 0x8000 00:13:29.024 nvme2n2 : 5.80 140.74 8.80 0.00 0.00 797882.51 163739.18 1742249.35 00:13:29.024 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x8000 length 0x8000 00:13:29.024 nvme2n2 : 5.96 85.87 5.37 0.00 0.00 1319625.65 35691.91 1664816.05 00:13:29.024 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x0 length 0x8000 00:13:29.024 nvme2n3 : 5.85 161.38 10.09 0.00 0.00 681066.43 49202.41 1561571.64 00:13:29.024 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x8000 length 0x8000 00:13:29.024 nvme2n3 : 5.96 99.26 6.20 0.00 0.00 1113292.12 32667.18 2000360.37 00:13:29.024 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x0 length 0x2000 00:13:29.024 nvme3n1 : 5.92 226.95 14.18 0.00 0.00 470559.37 1556.48 622692.82 00:13:29.024 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:29.024 Verification LBA range: start 0x2000 length 0x2000 00:13:29.024 nvme3n1 : 5.97 101.22 6.33 0.00 0.00 1058260.05 5217.67 2064888.12 00:13:29.024 [2024-11-18T11:56:26.725Z] =================================================================================================================== 00:13:29.024 [2024-11-18T11:56:26.725Z] Total : 1586.08 99.13 0.00 0.00 873409.89 1556.48 3006993.33 00:13:29.964 00:13:29.964 real 0m7.615s 00:13:29.964 user 0m13.969s 00:13:29.964 sys 0m0.439s 00:13:29.964 11:56:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:29.964 ************************************ 00:13:29.964 END TEST bdev_verify_big_io 
00:13:29.964 ************************************ 00:13:29.964 11:56:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.964 11:56:27 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.964 11:56:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:29.964 11:56:27 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:29.964 11:56:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.964 ************************************ 00:13:29.964 START TEST bdev_write_zeroes 00:13:29.964 ************************************ 00:13:29.964 11:56:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.964 [2024-11-18 11:56:27.431512] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:13:29.964 [2024-11-18 11:56:27.431624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70310 ] 00:13:29.964 [2024-11-18 11:56:27.581931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.225 [2024-11-18 11:56:27.674330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.483 Running I/O for 1 seconds... 00:13:31.417 80032.00 IOPS, 312.62 MiB/s 00:13:31.417 Latency(us) 00:13:31.417 [2024-11-18T11:56:29.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.417 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.417 nvme0n1 : 1.02 11674.96 45.61 0.00 0.00 10953.07 5696.59 21778.12 00:13:31.417 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.417 nvme1n1 : 1.03 20815.93 81.31 0.00 0.00 6116.48 4032.98 20366.57 00:13:31.417 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.417 nvme2n1 : 1.02 11643.17 45.48 0.00 0.00 10970.38 5721.80 20366.57 00:13:31.417 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.417 nvme2n2 : 1.02 11628.93 45.43 0.00 0.00 10933.04 4915.20 20769.87 00:13:31.417 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.417 nvme2n3 : 1.02 11615.86 45.37 0.00 0.00 10935.22 4990.82 21173.17 00:13:31.417 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:31.417 nvme3n1 : 1.03 11602.83 45.32 0.00 0.00 10938.52 5066.44 21475.64 00:13:31.417 [2024-11-18T11:56:29.118Z] =================================================================================================================== 00:13:31.417 [2024-11-18T11:56:29.118Z] Total : 78981.67 308.52 0.00 0.00 9669.88 4032.98 21778.12 00:13:32.358 00:13:32.358 real 0m2.429s 00:13:32.358 user 0m1.648s 00:13:32.358 sys 0m0.605s 00:13:32.358 11:56:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:32.358 11:56:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:32.358 ************************************ 00:13:32.358 END 
TEST bdev_write_zeroes 00:13:32.358 ************************************ 00:13:32.358 11:56:29 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.358 11:56:29 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:32.358 11:56:29 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.358 11:56:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.358 ************************************ 00:13:32.358 START TEST bdev_json_nonenclosed 00:13:32.358 ************************************ 00:13:32.358 11:56:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.358 [2024-11-18 11:56:29.904659] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:13:32.358 [2024-11-18 11:56:29.904775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70353 ] 00:13:32.617 [2024-11-18 11:56:30.063529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.617 [2024-11-18 11:56:30.173579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.617 [2024-11-18 11:56:30.173685] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:32.617 [2024-11-18 11:56:30.173704] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:32.617 [2024-11-18 11:56:30.173714] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:32.875 00:13:32.875 real 0m0.511s 00:13:32.875 user 0m0.307s 00:13:32.875 sys 0m0.100s 00:13:32.875 11:56:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:32.875 11:56:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:32.875 ************************************ 00:13:32.875 END TEST bdev_json_nonenclosed 00:13:32.875 ************************************ 00:13:32.875 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.875 11:56:30 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:32.875 11:56:30 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.875 11:56:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.875 ************************************ 00:13:32.875 START TEST bdev_json_nonarray 00:13:32.875 ************************************ 00:13:32.875 11:56:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.875 [2024-11-18 11:56:30.457286] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:13:32.875 [2024-11-18 11:56:30.457403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70384 ] 00:13:33.133 [2024-11-18 11:56:30.615483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.133 [2024-11-18 11:56:30.712955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.133 [2024-11-18 11:56:30.713053] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:33.133 [2024-11-18 11:56:30.713073] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:33.133 [2024-11-18 11:56:30.713083] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:33.390 00:13:33.390 real 0m0.496s 00:13:33.390 user 0m0.307s 00:13:33.390 sys 0m0.085s 00:13:33.390 11:56:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.390 11:56:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:33.390 ************************************ 00:13:33.390 END TEST bdev_json_nonarray 00:13:33.390 ************************************ 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:33.390 11:56:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:33.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:05.733 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.733 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.733 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.733 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.733 00:14:05.733 real 1m19.438s 00:14:05.733 user 1m26.248s 00:14:05.733 sys 1m33.147s 00:14:05.733 11:56:59 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:05.733 11:56:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.733 ************************************ 00:14:05.733 END TEST blockdev_xnvme 00:14:05.733 ************************************ 00:14:05.733 11:56:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:05.733 11:56:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:05.733 11:56:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:05.733 11:56:59 -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.733 ************************************ 00:14:05.733 START TEST ublk 00:14:05.733 ************************************ 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:05.733 * Looking for test storage... 00:14:05.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.733 11:56:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.733 11:56:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.733 11:56:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.733 11:56:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.733 11:56:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.733 11:56:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:05.733 11:56:59 ublk -- scripts/common.sh@345 -- # : 1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.733 11:56:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.733 11:56:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@353 -- # local d=1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.733 11:56:59 ublk -- scripts/common.sh@355 -- # echo 1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.733 11:56:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@353 -- # local d=2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.733 11:56:59 ublk -- scripts/common.sh@355 -- # echo 2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.733 11:56:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.733 11:56:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.733 11:56:59 ublk -- scripts/common.sh@368 -- # return 0 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.733 --rc genhtml_branch_coverage=1 00:14:05.733 --rc genhtml_function_coverage=1 00:14:05.733 --rc genhtml_legend=1 00:14:05.733 --rc geninfo_all_blocks=1 00:14:05.733 --rc geninfo_unexecuted_blocks=1 00:14:05.733 00:14:05.733 ' 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.733 --rc genhtml_branch_coverage=1 00:14:05.733 --rc genhtml_function_coverage=1 00:14:05.733 --rc genhtml_legend=1 00:14:05.733 --rc geninfo_all_blocks=1 00:14:05.733 --rc geninfo_unexecuted_blocks=1 00:14:05.733 00:14:05.733 ' 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.733 --rc genhtml_branch_coverage=1 00:14:05.733 --rc genhtml_function_coverage=1 00:14:05.733 --rc genhtml_legend=1 00:14:05.733 --rc geninfo_all_blocks=1 00:14:05.733 --rc geninfo_unexecuted_blocks=1 00:14:05.733 00:14:05.733 ' 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.733 --rc genhtml_branch_coverage=1 00:14:05.733 --rc genhtml_function_coverage=1 00:14:05.733 --rc genhtml_legend=1 00:14:05.733 --rc geninfo_all_blocks=1 00:14:05.733 --rc geninfo_unexecuted_blocks=1 00:14:05.733 00:14:05.733 ' 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:05.733 11:56:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:05.733 11:56:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:05.733 11:56:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:05.733 11:56:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:05.733 11:56:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:05.733 11:56:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:05.733 11:56:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:05.733 11:56:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:05.733 11:56:59 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:05.733 11:56:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:05.733 11:56:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.733 ************************************ 00:14:05.733 START TEST test_save_ublk_config 00:14:05.733 ************************************ 00:14:05.733 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:14:05.733 11:56:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:05.733 11:56:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70676 00:14:05.733 11:56:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:05.733 11:56:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70676 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70676 ']' 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.734 11:56:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.734 [2024-11-18 11:56:59.964399] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:14:05.734 [2024-11-18 11:56:59.964554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70676 ] 00:14:05.734 [2024-11-18 11:57:00.129565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.734 [2024-11-18 11:57:00.249724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.734 11:57:00 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:05.734 11:57:00 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:05.734 11:57:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:05.734 11:57:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:05.734 11:57:00 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.734 11:57:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.734 [2024-11-18 11:57:00.969613] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:05.734 [2024-11-18 11:57:00.970534] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:05.734 malloc0 00:14:05.734 [2024-11-18 11:57:01.041753] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:05.734 [2024-11-18 11:57:01.041846] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:05.734 [2024-11-18 11:57:01.041858] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:05.734 [2024-11-18 11:57:01.041866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.734 [2024-11-18 11:57:01.050711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.734 [2024-11-18 11:57:01.050740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.734 [2024-11-18 11:57:01.057622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.734 [2024-11-18 11:57:01.057749] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:05.734 [2024-11-18 11:57:01.074615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.734 0 00:14:05.734 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.734 11:57:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:05.734 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.734 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.734 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.734 11:57:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:05.734 "subsystems": [ 00:14:05.734 { 00:14:05.734 "subsystem": "fsdev", 00:14:05.734 "config": [ 00:14:05.734 { 00:14:05.734 "method": "fsdev_set_opts", 00:14:05.734 "params": { 00:14:05.734 "fsdev_io_pool_size": 65535, 00:14:05.734 "fsdev_io_cache_size": 256 00:14:05.734 } 00:14:05.734 } 00:14:05.734 ] 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "subsystem": "keyring", 00:14:05.734 "config": [] 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "subsystem": "iobuf", 00:14:05.734 "config": [ 00:14:05.734 { 
00:14:05.734 "method": "iobuf_set_options", 00:14:05.734 "params": { 00:14:05.734 "small_pool_count": 8192, 00:14:05.734 "large_pool_count": 1024, 00:14:05.734 "small_bufsize": 8192, 00:14:05.734 "large_bufsize": 135168, 00:14:05.734 "enable_numa": false 00:14:05.734 } 00:14:05.734 } 00:14:05.734 ] 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "subsystem": "sock", 00:14:05.734 "config": [ 00:14:05.734 { 00:14:05.734 "method": "sock_set_default_impl", 00:14:05.734 "params": { 00:14:05.734 "impl_name": "posix" 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "sock_impl_set_options", 00:14:05.734 "params": { 00:14:05.734 "impl_name": "ssl", 00:14:05.734 "recv_buf_size": 4096, 00:14:05.734 "send_buf_size": 4096, 00:14:05.734 "enable_recv_pipe": true, 00:14:05.734 "enable_quickack": false, 00:14:05.734 "enable_placement_id": 0, 00:14:05.734 "enable_zerocopy_send_server": true, 00:14:05.734 "enable_zerocopy_send_client": false, 00:14:05.734 "zerocopy_threshold": 0, 00:14:05.734 "tls_version": 0, 00:14:05.734 "enable_ktls": false 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "sock_impl_set_options", 00:14:05.734 "params": { 00:14:05.734 "impl_name": "posix", 00:14:05.734 "recv_buf_size": 2097152, 00:14:05.734 "send_buf_size": 2097152, 00:14:05.734 "enable_recv_pipe": true, 00:14:05.734 "enable_quickack": false, 00:14:05.734 "enable_placement_id": 0, 00:14:05.734 "enable_zerocopy_send_server": true, 00:14:05.734 "enable_zerocopy_send_client": false, 00:14:05.734 "zerocopy_threshold": 0, 00:14:05.734 "tls_version": 0, 00:14:05.734 "enable_ktls": false 00:14:05.734 } 00:14:05.734 } 00:14:05.734 ] 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "subsystem": "vmd", 00:14:05.734 "config": [] 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "subsystem": "accel", 00:14:05.734 "config": [ 00:14:05.734 { 00:14:05.734 "method": "accel_set_options", 00:14:05.734 "params": { 00:14:05.734 "small_cache_size": 128, 00:14:05.734 "large_cache_size": 16, 00:14:05.734 "task_count": 2048, 00:14:05.734 "sequence_count": 2048, 00:14:05.734 "buf_count": 2048 00:14:05.734 } 00:14:05.734 } 00:14:05.734 ] 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "subsystem": "bdev", 00:14:05.734 "config": [ 00:14:05.734 { 00:14:05.734 "method": "bdev_set_options", 00:14:05.734 "params": { 00:14:05.734 "bdev_io_pool_size": 65535, 00:14:05.734 "bdev_io_cache_size": 256, 00:14:05.734 "bdev_auto_examine": true, 00:14:05.734 "iobuf_small_cache_size": 128, 00:14:05.734 "iobuf_large_cache_size": 16 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "bdev_raid_set_options", 00:14:05.734 "params": { 00:14:05.734 "process_window_size_kb": 1024, 00:14:05.734 "process_max_bandwidth_mb_sec": 0 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "bdev_iscsi_set_options", 00:14:05.734 "params": { 00:14:05.734 "timeout_sec": 30 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "bdev_nvme_set_options", 00:14:05.734 "params": { 00:14:05.734 "action_on_timeout": "none", 00:14:05.734 "timeout_us": 0, 00:14:05.734 "timeout_admin_us": 0, 00:14:05.734 "keep_alive_timeout_ms": 10000, 00:14:05.734 "arbitration_burst": 0, 00:14:05.734 "low_priority_weight": 0, 00:14:05.734 "medium_priority_weight": 0, 00:14:05.734 "high_priority_weight": 0, 00:14:05.734 "nvme_adminq_poll_period_us": 10000, 00:14:05.734 "nvme_ioq_poll_period_us": 0, 00:14:05.734 "io_queue_requests": 0, 00:14:05.734 "delay_cmd_submit": true, 00:14:05.734 "transport_retry_count": 4, 00:14:05.734 
"bdev_retry_count": 3, 00:14:05.734 "transport_ack_timeout": 0, 00:14:05.734 "ctrlr_loss_timeout_sec": 0, 00:14:05.734 "reconnect_delay_sec": 0, 00:14:05.734 "fast_io_fail_timeout_sec": 0, 00:14:05.734 "disable_auto_failback": false, 00:14:05.734 "generate_uuids": false, 00:14:05.734 "transport_tos": 0, 00:14:05.734 "nvme_error_stat": false, 00:14:05.734 "rdma_srq_size": 0, 00:14:05.734 "io_path_stat": false, 00:14:05.734 "allow_accel_sequence": false, 00:14:05.734 "rdma_max_cq_size": 0, 00:14:05.734 "rdma_cm_event_timeout_ms": 0, 00:14:05.734 "dhchap_digests": [ 00:14:05.734 "sha256", 00:14:05.734 "sha384", 00:14:05.734 "sha512" 00:14:05.734 ], 00:14:05.734 "dhchap_dhgroups": [ 00:14:05.734 "null", 00:14:05.734 "ffdhe2048", 00:14:05.734 "ffdhe3072", 00:14:05.734 "ffdhe4096", 00:14:05.734 "ffdhe6144", 00:14:05.734 "ffdhe8192" 00:14:05.734 ] 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "bdev_nvme_set_hotplug", 00:14:05.734 "params": { 00:14:05.734 "period_us": 100000, 00:14:05.734 "enable": false 00:14:05.734 } 00:14:05.734 }, 00:14:05.734 { 00:14:05.734 "method": "bdev_malloc_create", 00:14:05.734 "params": { 00:14:05.734 "name": "malloc0", 00:14:05.734 "num_blocks": 8192, 00:14:05.734 "block_size": 4096, 00:14:05.734 "physical_block_size": 4096, 00:14:05.734 "uuid": "9a5d3479-3b64-46ec-9601-ef35c87acb73", 00:14:05.735 "optimal_io_boundary": 0, 00:14:05.735 "md_size": 0, 00:14:05.735 "dif_type": 0, 00:14:05.735 "dif_is_head_of_md": false, 00:14:05.735 "dif_pi_format": 0 00:14:05.735 } 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "method": "bdev_wait_for_examine" 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "scsi", 00:14:05.735 "config": null 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "scheduler", 00:14:05.735 "config": [ 00:14:05.735 { 00:14:05.735 "method": "framework_set_scheduler", 00:14:05.735 "params": { 00:14:05.735 "name": "static" 00:14:05.735 } 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "vhost_scsi", 00:14:05.735 "config": [] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "vhost_blk", 00:14:05.735 "config": [] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "ublk", 00:14:05.735 "config": [ 00:14:05.735 { 00:14:05.735 "method": "ublk_create_target", 00:14:05.735 "params": { 00:14:05.735 "cpumask": "1" 00:14:05.735 } 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "method": "ublk_start_disk", 00:14:05.735 "params": { 00:14:05.735 "bdev_name": "malloc0", 00:14:05.735 "ublk_id": 0, 00:14:05.735 "num_queues": 1, 00:14:05.735 "queue_depth": 128 00:14:05.735 } 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "nbd", 00:14:05.735 "config": [] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "nvmf", 00:14:05.735 "config": [ 00:14:05.735 { 00:14:05.735 "method": "nvmf_set_config", 00:14:05.735 "params": { 00:14:05.735 "discovery_filter": "match_any", 00:14:05.735 "admin_cmd_passthru": { 00:14:05.735 "identify_ctrlr": false 00:14:05.735 }, 00:14:05.735 "dhchap_digests": [ 00:14:05.735 "sha256", 00:14:05.735 "sha384", 00:14:05.735 "sha512" 00:14:05.735 ], 00:14:05.735 "dhchap_dhgroups": [ 00:14:05.735 "null", 00:14:05.735 "ffdhe2048", 00:14:05.735 "ffdhe3072", 00:14:05.735 "ffdhe4096", 00:14:05.735 "ffdhe6144", 00:14:05.735 "ffdhe8192" 00:14:05.735 ] 00:14:05.735 } 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "method": "nvmf_set_max_subsystems", 00:14:05.735 "params": { 00:14:05.735 "max_subsystems": 1024 
00:14:05.735 } 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "method": "nvmf_set_crdt", 00:14:05.735 "params": { 00:14:05.735 "crdt1": 0, 00:14:05.735 "crdt2": 0, 00:14:05.735 "crdt3": 0 00:14:05.735 } 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "subsystem": "iscsi", 00:14:05.735 "config": [ 00:14:05.735 { 00:14:05.735 "method": "iscsi_set_options", 00:14:05.735 "params": { 00:14:05.735 "node_base": "iqn.2016-06.io.spdk", 00:14:05.735 "max_sessions": 128, 00:14:05.735 "max_connections_per_session": 2, 00:14:05.735 "max_queue_depth": 64, 00:14:05.735 "default_time2wait": 2, 00:14:05.735 "default_time2retain": 20, 00:14:05.735 "first_burst_length": 8192, 00:14:05.735 "immediate_data": true, 00:14:05.735 "allow_duplicated_isid": false, 00:14:05.735 "error_recovery_level": 0, 00:14:05.735 "nop_timeout": 60, 00:14:05.735 "nop_in_interval": 30, 00:14:05.735 "disable_chap": false, 00:14:05.735 "require_chap": false, 00:14:05.735 "mutual_chap": false, 00:14:05.735 "chap_group": 0, 00:14:05.735 "max_large_datain_per_connection": 64, 00:14:05.735 "max_r2t_per_connection": 4, 00:14:05.735 "pdu_pool_size": 36864, 00:14:05.735 "immediate_data_pool_size": 16384, 00:14:05.735 "data_out_pool_size": 2048 00:14:05.735 } 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 }' 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70676 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70676 ']' 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70676 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70676 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:05.735 killing process with pid 70676 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70676' 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70676 00:14:05.735 11:57:01 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70676 00:14:05.735 [2024-11-18 11:57:02.497205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.735 [2024-11-18 11:57:02.533620] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.735 [2024-11-18 11:57:02.533795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.735 [2024-11-18 11:57:02.542635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.735 [2024-11-18 11:57:02.542702] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:05.735 [2024-11-18 11:57:02.542713] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:05.735 [2024-11-18 11:57:02.542747] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:05.735 [2024-11-18 11:57:02.542907] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70737 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 70737 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70737 ']' 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:06.305 11:57:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:06.305 "subsystems": [ 00:14:06.305 { 00:14:06.305 "subsystem": "fsdev", 00:14:06.305 "config": [ 00:14:06.305 { 00:14:06.305 "method": "fsdev_set_opts", 00:14:06.305 "params": { 00:14:06.305 "fsdev_io_pool_size": 65535, 00:14:06.305 "fsdev_io_cache_size": 256 00:14:06.305 } 00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "keyring", 00:14:06.305 "config": [] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "iobuf", 00:14:06.305 "config": [ 00:14:06.305 { 00:14:06.305 "method": "iobuf_set_options", 00:14:06.305 "params": { 00:14:06.305 "small_pool_count": 8192, 00:14:06.305 "large_pool_count": 1024, 00:14:06.305 "small_bufsize": 8192, 00:14:06.305 "large_bufsize": 135168, 00:14:06.305 "enable_numa": false 00:14:06.305 } 00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "sock", 00:14:06.305 "config": [ 00:14:06.305 { 00:14:06.305 "method": "sock_set_default_impl", 00:14:06.305 "params": { 00:14:06.305 "impl_name": "posix" 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "sock_impl_set_options", 00:14:06.305 "params": { 00:14:06.305 "impl_name": "ssl", 00:14:06.305 "recv_buf_size": 4096, 00:14:06.305 "send_buf_size": 4096, 00:14:06.305 "enable_recv_pipe": true, 00:14:06.305 "enable_quickack": false, 00:14:06.305 "enable_placement_id": 0, 00:14:06.305 "enable_zerocopy_send_server": true, 00:14:06.305 "enable_zerocopy_send_client": false, 00:14:06.305 "zerocopy_threshold": 0, 00:14:06.305 "tls_version": 0, 00:14:06.305 "enable_ktls": false 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "sock_impl_set_options", 00:14:06.305 "params": { 00:14:06.305 "impl_name": "posix", 00:14:06.305 "recv_buf_size": 2097152, 00:14:06.305 "send_buf_size": 2097152, 00:14:06.305 "enable_recv_pipe": true, 00:14:06.305 "enable_quickack": false, 00:14:06.305 "enable_placement_id": 0, 00:14:06.305 "enable_zerocopy_send_server": true, 00:14:06.305 "enable_zerocopy_send_client": false, 00:14:06.305 "zerocopy_threshold": 0, 00:14:06.305 "tls_version": 0, 00:14:06.305 "enable_ktls": false 00:14:06.305 } 00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "vmd", 00:14:06.305 "config": [] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "accel", 00:14:06.305 "config": [ 00:14:06.305 { 00:14:06.305 "method": "accel_set_options", 00:14:06.305 "params": { 00:14:06.305 "small_cache_size": 128, 
00:14:06.305 "large_cache_size": 16, 00:14:06.305 "task_count": 2048, 00:14:06.305 "sequence_count": 2048, 00:14:06.305 "buf_count": 2048 00:14:06.305 } 00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "bdev", 00:14:06.305 "config": [ 00:14:06.305 { 00:14:06.305 "method": "bdev_set_options", 00:14:06.305 "params": { 00:14:06.305 "bdev_io_pool_size": 65535, 00:14:06.305 "bdev_io_cache_size": 256, 00:14:06.305 "bdev_auto_examine": true, 00:14:06.305 "iobuf_small_cache_size": 128, 00:14:06.305 "iobuf_large_cache_size": 16 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "bdev_raid_set_options", 00:14:06.305 "params": { 00:14:06.305 "process_window_size_kb": 1024, 00:14:06.305 "process_max_bandwidth_mb_sec": 0 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "bdev_iscsi_set_options", 00:14:06.305 "params": { 00:14:06.305 "timeout_sec": 30 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "bdev_nvme_set_options", 00:14:06.305 "params": { 00:14:06.305 "action_on_timeout": "none", 00:14:06.305 "timeout_us": 0, 00:14:06.305 "timeout_admin_us": 0, 00:14:06.305 "keep_alive_timeout_ms": 10000, 00:14:06.305 "arbitration_burst": 0, 00:14:06.305 "low_priority_weight": 0, 00:14:06.305 "medium_priority_weight": 0, 00:14:06.305 "high_priority_weight": 0, 00:14:06.305 "nvme_adminq_poll_period_us": 10000, 00:14:06.305 "nvme_ioq_poll_period_us": 0, 00:14:06.305 "io_queue_requests": 0, 00:14:06.305 "delay_cmd_submit": true, 00:14:06.305 "transport_retry_count": 4, 00:14:06.305 "bdev_retry_count": 3, 00:14:06.305 "transport_ack_timeout": 0, 00:14:06.305 "ctrlr_loss_timeout_sec": 0, 00:14:06.305 "reconnect_delay_sec": 0, 00:14:06.305 "fast_io_fail_timeout_sec": 0, 00:14:06.305 "disable_auto_failback": false, 00:14:06.305 "generate_uuids": false, 00:14:06.305 "transport_tos": 0, 00:14:06.305 "nvme_error_stat": false, 00:14:06.305 "rdma_srq_size": 0, 00:14:06.305 "io_path_stat": false, 00:14:06.305 "allow_accel_sequence": false, 00:14:06.305 "rdma_max_cq_size": 0, 00:14:06.305 "rdma_cm_event_timeout_ms": 0, 00:14:06.305 "dhchap_digests": [ 00:14:06.305 "sha256", 00:14:06.305 "sha384", 00:14:06.305 "sha512" 00:14:06.305 ], 00:14:06.305 "dhchap_dhgroups": [ 00:14:06.305 "null", 00:14:06.305 "ffdhe2048", 00:14:06.305 "ffdhe3072", 00:14:06.305 "ffdhe4096", 00:14:06.305 "ffdhe6144", 00:14:06.305 "ffdhe8192" 00:14:06.305 ] 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "bdev_nvme_set_hotplug", 00:14:06.305 "params": { 00:14:06.305 "period_us": 100000, 00:14:06.305 "enable": false 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "bdev_malloc_create", 00:14:06.305 "params": { 00:14:06.305 "name": "malloc0", 00:14:06.305 "num_blocks": 8192, 00:14:06.305 "block_size": 4096, 00:14:06.305 "physical_block_size": 4096, 00:14:06.305 "uuid": "9a5d3479-3b64-46ec-9601-ef35c87acb73", 00:14:06.305 "optimal_io_boundary": 0, 00:14:06.305 "md_size": 0, 00:14:06.305 "dif_type": 0, 00:14:06.305 "dif_is_head_of_md": false, 00:14:06.305 "dif_pi_format": 0 00:14:06.305 } 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "method": "bdev_wait_for_examine" 00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "scsi", 00:14:06.305 "config": null 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "scheduler", 00:14:06.305 "config": [ 00:14:06.305 { 00:14:06.305 "method": "framework_set_scheduler", 00:14:06.305 "params": { 00:14:06.305 "name": "static" 00:14:06.305 } 
00:14:06.305 } 00:14:06.305 ] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "vhost_scsi", 00:14:06.305 "config": [] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "vhost_blk", 00:14:06.305 "config": [] 00:14:06.305 }, 00:14:06.305 { 00:14:06.305 "subsystem": "ublk", 00:14:06.305 "config": [ 00:14:06.306 { 00:14:06.306 "method": "ublk_create_target", 00:14:06.306 "params": { 00:14:06.306 "cpumask": "1" 00:14:06.306 } 00:14:06.306 }, 00:14:06.306 { 00:14:06.306 "method": "ublk_start_disk", 00:14:06.306 "params": { 00:14:06.306 "bdev_name": "malloc0", 00:14:06.306 "ublk_id": 0, 00:14:06.306 "num_queues": 1, 00:14:06.306 "queue_depth": 128 00:14:06.306 } 00:14:06.306 } 00:14:06.306 ] 00:14:06.306 }, 00:14:06.306 { 00:14:06.306 "subsystem": "nbd", 00:14:06.306 "config": [] 00:14:06.306 }, 00:14:06.306 { 00:14:06.306 "subsystem": "nvmf", 00:14:06.306 "config": [ 00:14:06.306 { 00:14:06.306 "method": "nvmf_set_config", 00:14:06.306 "params": { 00:14:06.306 "discovery_filter": "match_any", 00:14:06.306 "admin_cmd_passthru": { 00:14:06.306 "identify_ctrlr": false 00:14:06.306 }, 00:14:06.306 "dhchap_digests": [ 00:14:06.306 "sha256", 00:14:06.306 "sha384", 00:14:06.306 "sha512" 00:14:06.306 ], 00:14:06.306 "dhchap_dhgroups": [ 00:14:06.306 "null", 00:14:06.306 "ffdhe2048", 00:14:06.306 "ffdhe3072", 00:14:06.306 "ffdhe4096", 00:14:06.306 "ffdhe6144", 00:14:06.306 "ffdhe8192" 00:14:06.306 ] 00:14:06.306 } 00:14:06.306 }, 00:14:06.306 { 00:14:06.306 "method": "nvmf_set_max_subsystems", 00:14:06.306 "params": { 00:14:06.306 "max_subsystems": 1024 00:14:06.306 } 00:14:06.306 }, 00:14:06.306 { 00:14:06.306 "method": "nvmf_set_crdt", 00:14:06.306 "params": { 00:14:06.306 "crdt1": 0, 00:14:06.306 "crdt2": 0, 00:14:06.306 "crdt3": 0 00:14:06.306 } 00:14:06.306 } 00:14:06.306 ] 00:14:06.306 }, 00:14:06.306 { 00:14:06.306 "subsystem": "iscsi", 00:14:06.306 "config": [ 00:14:06.306 { 00:14:06.306 "method": "iscsi_set_options", 00:14:06.306 "params": { 00:14:06.306 "node_base": "iqn.2016-06.io.spdk", 00:14:06.306 "max_sessions": 128, 00:14:06.306 "max_connections_per_session": 2, 00:14:06.306 "max_queue_depth": 64, 00:14:06.306 "default_time2wait": 2, 00:14:06.306 "default_time2retain": 20, 00:14:06.306 "first_burst_length": 8192, 00:14:06.306 "immediate_data": true, 00:14:06.306 "allow_duplicated_isid": false, 00:14:06.306 "error_recovery_level": 0, 00:14:06.306 "nop_timeout": 60, 00:14:06.306 "nop_in_interval": 30, 00:14:06.306 "disable_chap": false, 00:14:06.306 "require_chap": false, 00:14:06.306 "mutual_chap": false, 00:14:06.306 "chap_group": 0, 00:14:06.306 "max_large_datain_per_connection": 64, 00:14:06.306 "max_r2t_per_connection": 4, 00:14:06.306 "pdu_pool_size": 36864, 00:14:06.306 "immediate_data_pool_size": 16384, 00:14:06.306 "data_out_pool_size": 2048 00:14:06.306 } 00:14:06.306 } 00:14:06.306 ] 00:14:06.306 } 00:14:06.306 ] 00:14:06.306 }' 00:14:06.306 [2024-11-18 11:57:03.971293] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:14:06.306 [2024-11-18 11:57:03.971759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70737 ] 00:14:06.565 [2024-11-18 11:57:04.128829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.565 [2024-11-18 11:57:04.211879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.500 [2024-11-18 11:57:04.842598] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:07.500 [2024-11-18 11:57:04.843227] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:07.500 [2024-11-18 11:57:04.850686] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:07.501 [2024-11-18 11:57:04.850745] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:07.501 [2024-11-18 11:57:04.850751] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:07.501 [2024-11-18 11:57:04.850756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:07.501 [2024-11-18 11:57:04.859648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:07.501 [2024-11-18 11:57:04.859666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:07.501 [2024-11-18 11:57:04.866604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:07.501 [2024-11-18 11:57:04.866671] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:07.501 [2024-11-18 11:57:04.883599] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70737 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70737 ']' 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70737 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70737 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.501 killing process with pid 70737 00:14:07.501 
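
This is the round trip that gives test_save_ublk_config its name: the JSON blob echoed above is what rpc_cmd save_config captured from the first target, and ublk.sh@118 feeds it to a fresh spdk_tgt through /dev/fd/63, so the ublk target, the malloc0 bdev, and /dev/ublkb0 are rebuilt from saved configuration alone before the device node is re-checked. A rough bash sketch of the same flow, assuming a default rpc.py socket and an fd-numbered here-string in place of the script's redirection (startup is asynchronous, so a real script would poll the RPC socket before querying):

  $ config=$(./scripts/rpc.py save_config)
  $ ./build/bin/spdk_tgt -L ublk -c /dev/fd/63 63<<<"$config" &
  $ ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'
  /dev/ublkb0
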
11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70737' 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70737 00:14:07.501 11:57:04 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70737 00:14:08.435 [2024-11-18 11:57:05.957569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.435 [2024-11-18 11:57:05.994609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.435 [2024-11-18 11:57:05.994711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.435 [2024-11-18 11:57:05.995701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.435 [2024-11-18 11:57:05.995738] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:08.435 [2024-11-18 11:57:05.995744] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:08.435 [2024-11-18 11:57:05.995764] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.435 [2024-11-18 11:57:05.995872] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.811 11:57:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:09.811 00:14:09.811 real 0m7.290s 00:14:09.811 user 0m5.050s 00:14:09.811 sys 0m2.870s 00:14:09.811 11:57:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.811 11:57:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:09.811 ************************************ 00:14:09.811 END TEST test_save_ublk_config 00:14:09.811 ************************************ 00:14:09.811 11:57:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70806 00:14:09.811 11:57:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.811 11:57:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70806 00:14:09.811 11:57:07 ublk -- common/autotest_common.sh@833 -- # '[' -z 70806 ']' 00:14:09.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.811 11:57:07 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.811 11:57:07 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:09.811 11:57:07 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.811 11:57:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.811 11:57:07 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:09.811 11:57:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.811 [2024-11-18 11:57:07.276138] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:14:09.811 [2024-11-18 11:57:07.276263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70806 ] 00:14:09.811 [2024-11-18 11:57:07.433685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.071 [2024-11-18 11:57:07.520568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.071 [2024-11-18 11:57:07.520631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.642 11:57:08 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:10.643 11:57:08 ublk -- common/autotest_common.sh@866 -- # return 0 00:14:10.643 11:57:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:10.643 11:57:08 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:10.643 11:57:08 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.643 11:57:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.643 ************************************ 00:14:10.643 START TEST test_create_ublk 00:14:10.643 ************************************ 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.643 [2024-11-18 11:57:08.089605] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.643 [2024-11-18 11:57:08.091417] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.643 [2024-11-18 11:57:08.286757] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:10.643 [2024-11-18 11:57:08.287150] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:10.643 [2024-11-18 11:57:08.287165] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:10.643 [2024-11-18 11:57:08.287176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.643 [2024-11-18 11:57:08.294629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.643 [2024-11-18 11:57:08.294655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.643 
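The lines above are the whole per-device bring-up for test_create_ublk: create the ublk target once, back it with a malloc bdev, then start the disk, which drives the kernel control handshake (ADD_DEV, SET_PARAMS, then START_DEV) that completes just below. Condensed into a sketch, with rpc.py standing in for the script's rpc_cmd wrapper and the sizes as logged:

  rpc.py ublk_create_target
  rpc.py bdev_malloc_create 128 4096             # 128 MiB bdev, 4 KiB blocks; returns "Malloc0"
  rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # export as ublk id 0 with 4 queues, depth 512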
[2024-11-18 11:57:08.302627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.643 [2024-11-18 11:57:08.310667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:10.643 [2024-11-18 11:57:08.326660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:10.643 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.643 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.903 11:57:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:10.903 { 00:14:10.903 "ublk_device": "/dev/ublkb0", 00:14:10.903 "id": 0, 00:14:10.903 "queue_depth": 512, 00:14:10.903 "num_queues": 4, 00:14:10.903 "bdev_name": "Malloc0" 00:14:10.903 } 00:14:10.903 ]' 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
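The fio_template assembled above expands to the single invocation that runs next. The same flags, reflowed for readability (134217728 bytes is the full 128 MiB of the malloc bdev):

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0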
00:14:10.903 11:57:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:11.162 fio: verification read phase will never start because write phase uses all of runtime 00:14:11.162 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:11.162 fio-3.35 00:14:11.162 Starting 1 process 00:14:21.129 00:14:21.129 fio_test: (groupid=0, jobs=1): err= 0: pid=70845: Mon Nov 18 11:57:18 2024 00:14:21.129 write: IOPS=15.1k, BW=59.1MiB/s (61.9MB/s)(591MiB/10001msec); 0 zone resets 00:14:21.129 clat (usec): min=38, max=3944, avg=65.42, stdev=96.38 00:14:21.129 lat (usec): min=38, max=3945, avg=65.83, stdev=96.39 00:14:21.129 clat percentiles (usec): 00:14:21.129 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 57], 00:14:21.129 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 63], 00:14:21.129 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 73], 00:14:21.129 | 99.00th=[ 84], 99.50th=[ 95], 99.90th=[ 2057], 99.95th=[ 2802], 00:14:21.129 | 99.99th=[ 3392] 00:14:21.129 bw ( KiB/s): min=54496, max=63552, per=100.00%, avg=60542.74, stdev=2019.80, samples=19 00:14:21.129 iops : min=13624, max=15888, avg=15135.68, stdev=504.95, samples=19 00:14:21.129 lat (usec) : 50=1.85%, 100=97.70%, 250=0.24%, 500=0.03%, 750=0.01% 00:14:21.129 lat (usec) : 1000=0.01% 00:14:21.129 lat (msec) : 2=0.05%, 4=0.10% 00:14:21.129 cpu : usr=1.89%, sys=13.12%, ctx=151197, majf=0, minf=796 00:14:21.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.129 issued rwts: total=0,151197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.129 00:14:21.129 Run status group 0 (all jobs): 00:14:21.129 WRITE: bw=59.1MiB/s (61.9MB/s), 59.1MiB/s-59.1MiB/s (61.9MB/s-61.9MB/s), io=591MiB (619MB), run=10001-10001msec 00:14:21.129 00:14:21.129 Disk stats (read/write): 00:14:21.129 ublkb0: ios=0/149585, merge=0/0, ticks=0/8289, in_queue=8290, util=99.10% 00:14:21.129 11:57:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.129 [2024-11-18 11:57:18.728919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:21.129 [2024-11-18 11:57:18.764124] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:21.129 [2024-11-18 11:57:18.765053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:21.129 [2024-11-18 11:57:18.771620] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:21.129 [2024-11-18 11:57:18.771870] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:21.129 [2024-11-18 11:57:18.771884] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.129 11:57:18 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
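With the verified write pass done, teardown starts: ublk_stop_disk 0 drove STOP_DEV and then DEL_DEV above, and the NOT wrapper just entered asserts that a second stop of the same id fails cleanly. As a sketch:

  rpc.py ublk_stop_disk 0                          # STOP_DEV, then DEL_DEV
  # repeating it must fail with -19 (No such device); see the JSON-RPC error below
  rpc.py ublk_stop_disk 0 || echo "expected ENODEV on double stop"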
00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.129 [2024-11-18 11:57:18.787657] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:21.129 request: 00:14:21.129 { 00:14:21.129 "ublk_id": 0, 00:14:21.129 "method": "ublk_stop_disk", 00:14:21.129 "req_id": 1 00:14:21.129 } 00:14:21.129 Got JSON-RPC error response 00:14:21.129 response: 00:14:21.129 { 00:14:21.129 "code": -19, 00:14:21.129 "message": "No such device" 00:14:21.129 } 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.129 11:57:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.129 [2024-11-18 11:57:18.803668] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:21.129 [2024-11-18 11:57:18.811597] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:21.129 [2024-11-18 11:57:18.811626] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.129 11:57:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.129 11:57:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.695 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.696 11:57:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:21.696 11:57:19 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:21.696 11:57:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:21.696 00:14:21.696 real 0m11.190s 00:14:21.696 user 0m0.483s 00:14:21.696 sys 0m1.383s 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:21.696 ************************************ 00:14:21.696 END TEST test_create_ublk 00:14:21.696 ************************************ 00:14:21.696 11:57:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.696 11:57:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:21.696 11:57:19 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:21.696 11:57:19 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.696 11:57:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.696 ************************************ 00:14:21.696 START TEST test_create_multi_ublk 00:14:21.696 ************************************ 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.696 [2024-11-18 11:57:19.322591] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.696 [2024-11-18 11:57:19.324153] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.696 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:21.953 [2024-11-18 11:57:19.550700] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
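test_create_multi_ublk repeats the single-disk recipe four times: the seq 0 3 loop entered above creates Malloc0 through Malloc3 and exports them as ublk ids 0-3, producing the four handshakes logged next. A condensed sketch with the same sizes and queue settings:

  for i in 0 1 2 3; do
    rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
    rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # /dev/ublkb$i
  done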
00:14:21.953 [2024-11-18 11:57:19.550996] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:21.953 [2024-11-18 11:57:19.551007] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:21.953 [2024-11-18 11:57:19.551015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.953 [2024-11-18 11:57:19.562631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.953 [2024-11-18 11:57:19.562652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.953 [2024-11-18 11:57:19.574614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.953 [2024-11-18 11:57:19.575101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:21.953 [2024-11-18 11:57:19.588615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:21.953 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:21.954 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.954 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.211 [2024-11-18 11:57:19.800693] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:22.211 [2024-11-18 11:57:19.800983] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:22.211 [2024-11-18 11:57:19.800996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:22.211 [2024-11-18 11:57:19.801001] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.211 [2024-11-18 11:57:19.809774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.211 [2024-11-18 11:57:19.809790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.211 [2024-11-18 11:57:19.816607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.211 [2024-11-18 11:57:19.817094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:22.211 [2024-11-18 11:57:19.825626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.211 11:57:19 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.211 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.469 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.469 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:22.469 11:57:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:22.469 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.469 11:57:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.469 [2024-11-18 11:57:19.984685] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:22.469 [2024-11-18 11:57:19.984985] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:22.469 [2024-11-18 11:57:19.984996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:22.469 [2024-11-18 11:57:19.985003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.469 [2024-11-18 11:57:19.992789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.469 [2024-11-18 11:57:19.992809] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.469 [2024-11-18 11:57:20.000608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.469 [2024-11-18 11:57:20.001112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:22.469 [2024-11-18 11:57:20.005462] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.469 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.469 [2024-11-18 11:57:20.164709] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:22.469 [2024-11-18 11:57:20.164999] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:22.469 [2024-11-18 11:57:20.165011] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:22.469 [2024-11-18 11:57:20.165016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.727 [2024-11-18 
11:57:20.172632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.728 [2024-11-18 11:57:20.172648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.728 [2024-11-18 11:57:20.180604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.728 [2024-11-18 11:57:20.181082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:22.728 [2024-11-18 11:57:20.184536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:22.728 { 00:14:22.728 "ublk_device": "/dev/ublkb0", 00:14:22.728 "id": 0, 00:14:22.728 "queue_depth": 512, 00:14:22.728 "num_queues": 4, 00:14:22.728 "bdev_name": "Malloc0" 00:14:22.728 }, 00:14:22.728 { 00:14:22.728 "ublk_device": "/dev/ublkb1", 00:14:22.728 "id": 1, 00:14:22.728 "queue_depth": 512, 00:14:22.728 "num_queues": 4, 00:14:22.728 "bdev_name": "Malloc1" 00:14:22.728 }, 00:14:22.728 { 00:14:22.728 "ublk_device": "/dev/ublkb2", 00:14:22.728 "id": 2, 00:14:22.728 "queue_depth": 512, 00:14:22.728 "num_queues": 4, 00:14:22.728 "bdev_name": "Malloc2" 00:14:22.728 }, 00:14:22.728 { 00:14:22.728 "ublk_device": "/dev/ublkb3", 00:14:22.728 "id": 3, 00:14:22.728 "queue_depth": 512, 00:14:22.728 "num_queues": 4, 00:14:22.728 "bdev_name": "Malloc3" 00:14:22.728 } 00:14:22.728 ]' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:22.728 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:22.986 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.244 [2024-11-18 11:57:20.816676] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.244 [2024-11-18 11:57:20.858117] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.244 [2024-11-18 11:57:20.859243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.244 [2024-11-18 11:57:20.864626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.244 [2024-11-18 11:57:20.864864] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:23.244 [2024-11-18 11:57:20.864877] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.244 [2024-11-18 11:57:20.880655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.244 [2024-11-18 11:57:20.911647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.244 [2024-11-18 11:57:20.912480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.244 [2024-11-18 11:57:20.920604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.244 [2024-11-18 11:57:20.920863] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:23.244 [2024-11-18 11:57:20.920877] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.244 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.244 [2024-11-18 11:57:20.927675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.503 [2024-11-18 11:57:20.974616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.503 [2024-11-18 11:57:20.975415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.503 [2024-11-18 11:57:20.984600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.503 [2024-11-18 11:57:20.984847] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:23.503 [2024-11-18 11:57:20.984861] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:23.503 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.503 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.503 11:57:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:23.503 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.503 11:57:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
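Disks 0 through 2 are stopped at this point; disk 3's stop and the target teardown follow. The whole multi-disk cleanup reduces to this sketch (the -t 120 timeout matches the rpc.py call logged below):

  for i in 0 1 2 3; do rpc.py ublk_stop_disk "$i"; done
  rpc.py -t 120 ublk_destroy_target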
00:14:23.503 [2024-11-18 11:57:20.992662] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:23.503 [2024-11-18 11:57:21.038633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:23.503 [2024-11-18 11:57:21.039313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:23.503 [2024-11-18 11:57:21.049631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:23.503 [2024-11-18 11:57:21.049875] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:23.503 [2024-11-18 11:57:21.049882] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:23.503 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.503 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:23.761 [2024-11-18 11:57:21.240654] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:23.761 [2024-11-18 11:57:21.248606] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:23.761 [2024-11-18 11:57:21.248634] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:23.761 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:23.761 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.761 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:23.761 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.761 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.019 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.019 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.019 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:24.019 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.019 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.277 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.277 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.277 11:57:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:24.277 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.277 11:57:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.535 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.535 11:57:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.535 11:57:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:24.535 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.535 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:24.793 00:14:24.793 real 0m3.138s 00:14:24.793 user 0m0.781s 00:14:24.793 sys 0m0.149s 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.793 11:57:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.793 ************************************ 00:14:24.793 END TEST test_create_multi_ublk 00:14:24.793 ************************************ 00:14:24.793 11:57:22 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:24.793 11:57:22 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:24.793 11:57:22 ublk -- ublk/ublk.sh@130 -- # killprocess 70806 00:14:24.793 11:57:22 ublk -- common/autotest_common.sh@952 -- # '[' -z 70806 ']' 00:14:24.793 11:57:22 ublk -- common/autotest_common.sh@956 -- # kill -0 70806 00:14:24.793 11:57:22 ublk -- common/autotest_common.sh@957 -- # uname 00:14:24.793 11:57:22 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.793 11:57:22 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70806 00:14:25.052 11:57:22 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:25.052 killing process with pid 70806 00:14:25.052 11:57:22 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:25.052 11:57:22 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70806' 00:14:25.052 11:57:22 ublk -- common/autotest_common.sh@971 -- # kill 70806 00:14:25.052 11:57:22 ublk -- common/autotest_common.sh@976 -- # wait 70806 00:14:25.619 [2024-11-18 11:57:23.025037] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:25.619 [2024-11-18 11:57:23.025084] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:26.187 00:14:26.187 real 0m23.976s 00:14:26.187 user 0m34.175s 00:14:26.187 sys 0m9.339s 00:14:26.187 11:57:23 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.187 11:57:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.187 ************************************ 00:14:26.187 END TEST ublk 00:14:26.187 ************************************ 00:14:26.187 11:57:23 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:26.187 11:57:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:14:26.187 11:57:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.187 11:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:26.187 ************************************ 00:14:26.187 START TEST ublk_recovery 00:14:26.187 ************************************ 00:14:26.187 11:57:23 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:26.187 * Looking for test storage... 00:14:26.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:26.187 11:57:23 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:26.187 11:57:23 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:26.187 11:57:23 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:26.187 11:57:23 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:26.187 11:57:23 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.187 11:57:23 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.187 11:57:23 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.188 11:57:23 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:26.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.188 --rc genhtml_branch_coverage=1 00:14:26.188 --rc genhtml_function_coverage=1 00:14:26.188 --rc genhtml_legend=1 00:14:26.188 --rc geninfo_all_blocks=1 00:14:26.188 --rc geninfo_unexecuted_blocks=1 00:14:26.188 00:14:26.188 ' 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:26.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.188 --rc genhtml_branch_coverage=1 00:14:26.188 --rc genhtml_function_coverage=1 00:14:26.188 --rc genhtml_legend=1 00:14:26.188 --rc geninfo_all_blocks=1 00:14:26.188 --rc geninfo_unexecuted_blocks=1 00:14:26.188 00:14:26.188 ' 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:26.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.188 --rc genhtml_branch_coverage=1 00:14:26.188 --rc genhtml_function_coverage=1 00:14:26.188 --rc genhtml_legend=1 00:14:26.188 --rc geninfo_all_blocks=1 00:14:26.188 --rc geninfo_unexecuted_blocks=1 00:14:26.188 00:14:26.188 ' 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:26.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.188 --rc genhtml_branch_coverage=1 00:14:26.188 --rc genhtml_function_coverage=1 00:14:26.188 --rc genhtml_legend=1 00:14:26.188 --rc geninfo_all_blocks=1 00:14:26.188 --rc geninfo_unexecuted_blocks=1 00:14:26.188 00:14:26.188 ' 00:14:26.188 11:57:23 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:26.188 11:57:23 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:26.188 11:57:23 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:26.188 11:57:23 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71189 00:14:26.188 11:57:23 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.188 11:57:23 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71189 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71189 ']' 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:26.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:26.188 11:57:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.188 11:57:23 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:26.449 [2024-11-18 11:57:23.925253] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:14:26.449 [2024-11-18 11:57:23.925381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:14:26.449 [2024-11-18 11:57:24.086982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:26.711 [2024-11-18 11:57:24.187488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.711 [2024-11-18 11:57:24.187624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:27.283 11:57:24 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.283 [2024-11-18 11:57:24.783603] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:27.283 [2024-11-18 11:57:24.785457] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.283 11:57:24 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.283 malloc0 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.283 11:57:24 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.283 11:57:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.283 [2024-11-18 11:57:24.887735] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:27.284 [2024-11-18 11:57:24.887832] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:27.284 [2024-11-18 11:57:24.887843] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:27.284 [2024-11-18 11:57:24.887852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.284 [2024-11-18 11:57:24.896706] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.284 [2024-11-18 11:57:24.896725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.284 [2024-11-18 11:57:24.903617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.284 [2024-11-18 11:57:24.903757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:27.284 [2024-11-18 11:57:24.914623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.284 1 00:14:27.284 11:57:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.284 11:57:24 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:28.662 11:57:25 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71224 00:14:28.662 11:57:25 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:28.662 11:57:25 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:28.662 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:28.662 fio-3.35 00:14:28.662 Starting 1 process 00:14:33.928 11:57:30 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71189 00:14:33.928 11:57:30 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:39.258 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71189 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:39.258 11:57:35 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71335 00:14:39.258 11:57:35 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:39.258 11:57:35 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71335 00:14:39.258 11:57:35 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:39.258 11:57:35 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71335 ']' 00:14:39.258 11:57:35 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.258 11:57:35 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:39.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.258 11:57:35 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.258 11:57:35 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:39.258 11:57:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:39.258 [2024-11-18 11:57:36.019372] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
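The crash half of ublk_recovery is complete: fio (pid 71224) keeps driving I/O to /dev/ublkb1 while the target that exported it (pid 71189) is killed with SIGKILL, and a replacement target (pid 71335) is starting. The recovery half logged below reduces to this sketch; $old_spdk_pid is a placeholder for the killed process:

  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # exported before the crash
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  kill -9 "$old_spdk_pid"                         # 71189 in this run
  # a fresh spdk_tgt comes up, then re-adopts the live device:
  rpc.py ublk_create_target
  rpc.py ublk_recover_disk malloc0 1              # polls GET_DEV_INFO, then START_USER_RECOVERY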
00:14:39.258 [2024-11-18 11:57:36.019984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:14:39.258 [2024-11-18 11:57:36.189639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:39.259 [2024-11-18 11:57:36.310422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.259 [2024-11-18 11:57:36.310533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:39.259 11:57:36 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:39.259 [2024-11-18 11:57:36.912610] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:39.259 [2024-11-18 11:57:36.914418] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.259 11:57:36 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.259 11:57:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:39.520 malloc0 00:14:39.520 11:57:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.520 11:57:37 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:39.520 11:57:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.520 11:57:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:39.520 [2024-11-18 11:57:37.013738] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:39.520 [2024-11-18 11:57:37.013774] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:39.520 [2024-11-18 11:57:37.013784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:39.520 [2024-11-18 11:57:37.021641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:39.520 [2024-11-18 11:57:37.021662] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:39.520 1 00:14:39.520 11:57:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.520 11:57:37 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71224 00:14:40.458 [2024-11-18 11:57:38.021703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:40.458 [2024-11-18 11:57:38.029607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:40.458 [2024-11-18 11:57:38.029623] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:41.392 [2024-11-18 11:57:39.029653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:41.392 [2024-11-18 11:57:39.037604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:41.393 [2024-11-18 11:57:39.037624] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
00:14:42.342 [2024-11-18 11:57:40.037647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:14:42.599 [2024-11-18 11:57:40.045607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:14:42.599 [2024-11-18 11:57:40.045622] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:14:42.599 [2024-11-18 11:57:40.045630] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:14:42.599 [2024-11-18 11:57:40.045697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:15:04.520 [2024-11-18 11:58:00.963612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:15:04.520 [2024-11-18 11:58:00.970169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:15:04.520 [2024-11-18 11:58:00.977611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:15:04.520 [2024-11-18 11:58:00.977628] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:15:31.113
00:15:31.113 fio_test: (groupid=0, jobs=1): err= 0: pid=71227: Mon Nov 18 11:58:26 2024
00:15:31.113 read: IOPS=13.5k, BW=52.9MiB/s (55.5MB/s)(3174MiB/60001msec)
00:15:31.113 slat (nsec): min=1169, max=288871, avg=5228.19, stdev=1592.19
00:15:31.113 clat (usec): min=1307, max=30060k, avg=4419.93, stdev=251726.23
00:15:31.113 lat (usec): min=1312, max=30060k, avg=4425.16, stdev=251726.22
00:15:31.113 clat percentiles (usec):
00:15:31.113 | 1.00th=[ 1876], 5.00th=[ 1958], 10.00th=[ 2008], 20.00th=[ 2114],
00:15:31.113 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2180], 60.00th=[ 2212],
00:15:31.113 | 70.00th=[ 2212], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3359],
00:15:31.113 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 7504], 99.95th=[ 8848],
00:15:31.113 | 99.99th=[12780]
00:15:31.113 bw ( KiB/s): min=18280, max=121544, per=100.00%, avg=106662.00, stdev=16979.65, samples=60
00:15:31.113 iops : min= 4570, max=30386, avg=26665.50, stdev=4244.91, samples=60
00:15:31.113 write: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(3170MiB/60001msec); 0 zone resets
00:15:31.113 slat (nsec): min=1215, max=254369, avg=5482.22, stdev=1636.88
00:15:31.113 clat (usec): min=1352, max=30060k, avg=5025.68, stdev=281102.86
00:15:31.113 lat (usec): min=1357, max=30060k, avg=5031.17, stdev=281102.85
00:15:31.113 clat percentiles (usec):
00:15:31.113 | 1.00th=[ 1942], 5.00th=[ 2057], 10.00th=[ 2089], 20.00th=[ 2212],
00:15:31.113 | 30.00th=[ 2245], 40.00th=[ 2278], 50.00th=[ 2278], 60.00th=[ 2311],
00:15:31.113 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2409], 95.00th=[ 3326],
00:15:31.113 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7701], 99.95th=[ 8979],
00:15:31.113 | 99.99th=[12911]
00:15:31.113 bw ( KiB/s): min=19112, max=121936, per=100.00%, avg=106496.00, stdev=16887.07, samples=60
00:15:31.113 iops : min= 4778, max=30484, avg=26624.00, stdev=4221.77, samples=60
00:15:31.113 lat (msec) : 2=5.51%, 4=91.01%, 10=3.44%, 20=0.03%, >=2000=0.01%
00:15:31.113 cpu : usr=3.06%, sys=14.76%, ctx=53403, majf=0, minf=13
00:15:31.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:15:31.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:31.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:31.113 issued rwts: total=812580,811646,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:31.113 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:31.113
00:15:31.113 Run status group 0 (all jobs):
00:15:31.113 READ: bw=52.9MiB/s (55.5MB/s), 52.9MiB/s-52.9MiB/s (55.5MB/s-55.5MB/s), io=3174MiB (3328MB), run=60001-60001msec
00:15:31.113 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=3170MiB (3325MB), run=60001-60001msec
00:15:31.113
00:15:31.113 Disk stats (read/write):
00:15:31.113 ublkb1: ios=809672/808610, merge=0/0, ticks=3544182/3964684, in_queue=7508867, util=99.89%
00:15:31.113 11:58:26 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:31.113 [2024-11-18 11:58:26.172090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:15:31.113 [2024-11-18 11:58:26.207711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:31.113 [2024-11-18 11:58:26.207862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:15:31.113 [2024-11-18 11:58:26.215617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:31.113 [2024-11-18 11:58:26.215715] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:15:31.113 [2024-11-18 11:58:26.215722] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:31.113 11:58:26 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:31.113 [2024-11-18 11:58:26.231689] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:31.113 [2024-11-18 11:58:26.239601] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:31.113 [2024-11-18 11:58:26.239630] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:31.113 11:58:26 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:15:31.113 11:58:26 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:15:31.113 11:58:26 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71335
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71335 ']'
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71335
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@957 -- # uname
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71335
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:31.113 killing process with pid 71335 11:58:26 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71335' 11:58:26 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71335
00:15:31.113 11:58:26 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71335
00:15:31.113 [2024-11-18
11:58:27.290577] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:31.113 [2024-11-18 11:58:27.290628] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:31.113 00:15:31.113 real 1m4.276s 00:15:31.113 user 1m46.526s 00:15:31.113 sys 0m21.966s 00:15:31.113 11:58:27 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:31.113 11:58:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.113 ************************************ 00:15:31.113 END TEST ublk_recovery 00:15:31.113 ************************************ 00:15:31.113 11:58:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:31.113 11:58:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:31.113 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:15:31.113 11:58:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:31.113 11:58:28 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:31.113 11:58:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:31.113 11:58:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:31.113 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:15:31.113 ************************************ 00:15:31.113 START TEST ftl 00:15:31.113 ************************************ 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:31.113 * Looking for test storage... 00:15:31.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.113 11:58:28 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.113 11:58:28 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.113 11:58:28 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.113 11:58:28 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.113 11:58:28 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.113 11:58:28 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:31.113 11:58:28 ftl -- scripts/common.sh@345 -- # : 1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.113 11:58:28 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.113 11:58:28 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@353 -- # local d=1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.113 11:58:28 ftl -- scripts/common.sh@355 -- # echo 1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.113 11:58:28 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@353 -- # local d=2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.113 11:58:28 ftl -- scripts/common.sh@355 -- # echo 2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.113 11:58:28 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.113 11:58:28 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.113 11:58:28 ftl -- scripts/common.sh@368 -- # return 0 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:31.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.113 --rc genhtml_branch_coverage=1 00:15:31.113 --rc genhtml_function_coverage=1 00:15:31.113 --rc genhtml_legend=1 00:15:31.113 --rc geninfo_all_blocks=1 00:15:31.113 --rc geninfo_unexecuted_blocks=1 00:15:31.113 00:15:31.113 ' 00:15:31.113 11:58:28 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:31.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.113 --rc genhtml_branch_coverage=1 00:15:31.113 --rc genhtml_function_coverage=1 00:15:31.114 --rc genhtml_legend=1 00:15:31.114 --rc geninfo_all_blocks=1 00:15:31.114 --rc geninfo_unexecuted_blocks=1 00:15:31.114 00:15:31.114 ' 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:31.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.114 --rc genhtml_branch_coverage=1 00:15:31.114 --rc genhtml_function_coverage=1 00:15:31.114 --rc genhtml_legend=1 00:15:31.114 --rc geninfo_all_blocks=1 00:15:31.114 --rc geninfo_unexecuted_blocks=1 00:15:31.114 00:15:31.114 ' 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:31.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.114 --rc genhtml_branch_coverage=1 00:15:31.114 --rc genhtml_function_coverage=1 00:15:31.114 --rc genhtml_legend=1 00:15:31.114 --rc geninfo_all_blocks=1 00:15:31.114 --rc geninfo_unexecuted_blocks=1 00:15:31.114 00:15:31.114 ' 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:31.114 11:58:28 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:31.114 11:58:28 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:31.114 11:58:28 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:31.114 11:58:28 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
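The cmp_versions churn above is the stock autotest preamble deciding whether the installed lcov (1.15 here) predates 2.x, which gates the branch/function coverage flags that follow. A minimal re-sketch of that helper, assuming purely numeric version components (the real scripts/common.sh also validates each field):

lt() {  # usage: lt A B  ->  succeeds when version A sorts before version B
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"   # same separators the trace splits on
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                         # equal versions are not less-than
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi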
00:15:31.114 11:58:28 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:31.114 11:58:28 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.114 11:58:28 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:31.114 11:58:28 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:31.114 11:58:28 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:31.114 11:58:28 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:31.114 11:58:28 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:31.114 11:58:28 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:31.114 11:58:28 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:31.114 11:58:28 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:31.114 11:58:28 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:31.114 11:58:28 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:31.114 11:58:28 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:31.114 11:58:28 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:31.114 11:58:28 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:31.114 11:58:28 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:31.114 11:58:28 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:31.114 11:58:28 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:31.114 11:58:28 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:31.114 11:58:28 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:31.114 11:58:28 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:31.114 11:58:28 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:31.114 11:58:28 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:31.114 11:58:28 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:31.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:31.114 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:31.114 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:31.114 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:31.114 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72145 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72145 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@833 -- # '[' -z 72145 ']' 00:15:31.114 11:58:28 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:31.114 11:58:28 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:31.114 11:58:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:31.375 [2024-11-18 11:58:28.813738] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:15:31.375 [2024-11-18 11:58:28.814189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72145 ] 00:15:31.375 [2024-11-18 11:58:28.970426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.637 [2024-11-18 11:58:29.085086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.207 11:58:29 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:32.207 11:58:29 ftl -- common/autotest_common.sh@866 -- # return 0 00:15:32.207 11:58:29 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:32.207 11:58:29 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:33.142 11:58:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:33.142 11:58:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:33.402 11:58:30 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:33.402 11:58:30 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:33.402 11:58:30 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@50 -- # break 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@63 -- # break 00:15:33.664 11:58:31 ftl -- ftl/ftl.sh@66 -- # killprocess 72145 00:15:33.664 11:58:31 ftl -- common/autotest_common.sh@952 -- # '[' -z 72145 ']' 00:15:33.664 11:58:31 ftl -- common/autotest_common.sh@956 -- # kill -0 72145 00:15:33.664 11:58:31 ftl -- common/autotest_common.sh@957 -- # uname 00:15:33.664 11:58:31 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.664 11:58:31 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72145 00:15:33.927 killing process with pid 72145 00:15:33.927 11:58:31 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:33.927 11:58:31 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:33.927 11:58:31 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72145' 00:15:33.927 11:58:31 ftl -- common/autotest_common.sh@971 -- # kill 72145 00:15:33.927 11:58:31 ftl -- common/autotest_common.sh@976 -- # wait 72145 00:15:35.307 11:58:32 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:35.307 11:58:32 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:35.307 11:58:32 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:35.307 11:58:32 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.307 11:58:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:35.307 ************************************ 00:15:35.307 START TEST ftl_fio_basic 00:15:35.307 ************************************ 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:35.307 * Looking for test storage... 00:15:35.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.307 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.308 --rc genhtml_branch_coverage=1 00:15:35.308 --rc genhtml_function_coverage=1 00:15:35.308 --rc genhtml_legend=1 00:15:35.308 --rc geninfo_all_blocks=1 00:15:35.308 --rc geninfo_unexecuted_blocks=1 00:15:35.308 00:15:35.308 ' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:35.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.308 --rc genhtml_branch_coverage=1 00:15:35.308 --rc genhtml_function_coverage=1 00:15:35.308 --rc genhtml_legend=1 00:15:35.308 --rc geninfo_all_blocks=1 00:15:35.308 --rc geninfo_unexecuted_blocks=1 00:15:35.308 00:15:35.308 ' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:35.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.308 --rc genhtml_branch_coverage=1 00:15:35.308 --rc genhtml_function_coverage=1 00:15:35.308 --rc genhtml_legend=1 00:15:35.308 --rc geninfo_all_blocks=1 00:15:35.308 --rc geninfo_unexecuted_blocks=1 00:15:35.308 00:15:35.308 ' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:35.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.308 --rc genhtml_branch_coverage=1 00:15:35.308 --rc genhtml_function_coverage=1 00:15:35.308 --rc genhtml_legend=1 00:15:35.308 --rc geninfo_all_blocks=1 00:15:35.308 --rc geninfo_unexecuted_blocks=1 00:15:35.308 00:15:35.308 ' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
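Before this second helping of coverage-flag preamble, ftl.sh had to decide which namespace backs the FTL core and which backs the non-volatile cache, and it did that purely from bdev_get_bdevs output: the cache disk must expose 64-byte metadata (md_size==64), and any other large, non-zoned namespace will do as the base. A condensed replay, with the jq filters copied from the trace; head -n1 stands in for the script's first-match loop, and the --arg substitution stands in for the hard-coded cache address the script interpolates:

rpc=scripts/rpc.py

nv_cache=$($rpc bdev_get_bdevs | jq -r '.[]
    | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address' | head -n1)

base=$($rpc bdev_get_bdevs | jq -r --arg c "$nv_cache" '.[]
    | select(.driver_specific.nvme[0].pci_address != $c
             and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address' | head -n1)

echo "nv_cache=$nv_cache base=$base"   # 0000:00:10.0 and 0000:00:11.0 here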
00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72272 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72272 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72272 ']' 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.308 11:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.308 [2024-11-18 11:58:32.873480] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
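waitforlisten, invoked above with max_retries=100 against /var/tmp/spdk.sock, is at heart a poll loop on the RPC unix socket. A rough stand-in under the same defaults; rpc_get_methods is a built-in RPC that answers as soon as the app is listening, though the real helper does a little more bookkeeping:

build/bin/spdk_tgt -m 7 & svcpid=$!      # core mask 7, as in the trace

for ((i = 0; i < 100; i++)); do          # mirrors max_retries=100
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done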
00:15:35.308 [2024-11-18 11:58:32.873792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72272 ] 00:15:35.567 [2024-11-18 11:58:33.030461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.567 [2024-11-18 11:58:33.116908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.567 [2024-11-18 11:58:33.117082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.568 [2024-11-18 11:58:33.117112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:36.136 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:36.395 11:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:36.654 { 00:15:36.654 "name": "nvme0n1", 00:15:36.654 "aliases": [ 00:15:36.654 "5289619a-81ea-4266-b110-adea0e0fb61c" 00:15:36.654 ], 00:15:36.654 "product_name": "NVMe disk", 00:15:36.654 "block_size": 4096, 00:15:36.654 "num_blocks": 1310720, 00:15:36.654 "uuid": "5289619a-81ea-4266-b110-adea0e0fb61c", 00:15:36.654 "numa_id": -1, 00:15:36.654 "assigned_rate_limits": { 00:15:36.654 "rw_ios_per_sec": 0, 00:15:36.654 "rw_mbytes_per_sec": 0, 00:15:36.654 "r_mbytes_per_sec": 0, 00:15:36.654 "w_mbytes_per_sec": 0 00:15:36.654 }, 00:15:36.654 "claimed": false, 00:15:36.654 "zoned": false, 00:15:36.654 "supported_io_types": { 00:15:36.654 "read": true, 00:15:36.654 "write": true, 00:15:36.654 "unmap": true, 00:15:36.654 "flush": true, 00:15:36.654 "reset": true, 00:15:36.654 "nvme_admin": true, 00:15:36.654 "nvme_io": true, 00:15:36.654 "nvme_io_md": false, 00:15:36.654 "write_zeroes": true, 00:15:36.654 "zcopy": false, 00:15:36.654 "get_zone_info": false, 00:15:36.654 "zone_management": false, 00:15:36.654 "zone_append": false, 00:15:36.654 "compare": true, 00:15:36.654 "compare_and_write": false, 00:15:36.654 "abort": true, 00:15:36.654 
"seek_hole": false, 00:15:36.654 "seek_data": false, 00:15:36.654 "copy": true, 00:15:36.654 "nvme_iov_md": false 00:15:36.654 }, 00:15:36.654 "driver_specific": { 00:15:36.654 "nvme": [ 00:15:36.654 { 00:15:36.654 "pci_address": "0000:00:11.0", 00:15:36.654 "trid": { 00:15:36.654 "trtype": "PCIe", 00:15:36.654 "traddr": "0000:00:11.0" 00:15:36.654 }, 00:15:36.654 "ctrlr_data": { 00:15:36.654 "cntlid": 0, 00:15:36.654 "vendor_id": "0x1b36", 00:15:36.654 "model_number": "QEMU NVMe Ctrl", 00:15:36.654 "serial_number": "12341", 00:15:36.654 "firmware_revision": "8.0.0", 00:15:36.654 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:36.654 "oacs": { 00:15:36.654 "security": 0, 00:15:36.654 "format": 1, 00:15:36.654 "firmware": 0, 00:15:36.654 "ns_manage": 1 00:15:36.654 }, 00:15:36.654 "multi_ctrlr": false, 00:15:36.654 "ana_reporting": false 00:15:36.654 }, 00:15:36.654 "vs": { 00:15:36.654 "nvme_version": "1.4" 00:15:36.654 }, 00:15:36.654 "ns_data": { 00:15:36.654 "id": 1, 00:15:36.654 "can_share": false 00:15:36.654 } 00:15:36.654 } 00:15:36.654 ], 00:15:36.654 "mp_policy": "active_passive" 00:15:36.654 } 00:15:36.654 } 00:15:36.654 ]' 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:36.654 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:36.913 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:36.913 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:36.913 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=f7de47bd-5a0d-43c8-a1e2-636b1a533250 00:15:36.913 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f7de47bd-5a0d-43c8-a1e2-636b1a533250 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=5b06aef9-9fca-4a35-84e7-a92482a0b0d6 
00:15:37.173 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:37.173 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.431 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:37.431 { 00:15:37.431 "name": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:37.431 "aliases": [ 00:15:37.431 "lvs/nvme0n1p0" 00:15:37.431 ], 00:15:37.431 "product_name": "Logical Volume", 00:15:37.431 "block_size": 4096, 00:15:37.431 "num_blocks": 26476544, 00:15:37.431 "uuid": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:37.431 "assigned_rate_limits": { 00:15:37.431 "rw_ios_per_sec": 0, 00:15:37.431 "rw_mbytes_per_sec": 0, 00:15:37.431 "r_mbytes_per_sec": 0, 00:15:37.431 "w_mbytes_per_sec": 0 00:15:37.431 }, 00:15:37.431 "claimed": false, 00:15:37.431 "zoned": false, 00:15:37.431 "supported_io_types": { 00:15:37.431 "read": true, 00:15:37.431 "write": true, 00:15:37.431 "unmap": true, 00:15:37.431 "flush": false, 00:15:37.431 "reset": true, 00:15:37.431 "nvme_admin": false, 00:15:37.431 "nvme_io": false, 00:15:37.431 "nvme_io_md": false, 00:15:37.431 "write_zeroes": true, 00:15:37.431 "zcopy": false, 00:15:37.431 "get_zone_info": false, 00:15:37.431 "zone_management": false, 00:15:37.431 "zone_append": false, 00:15:37.431 "compare": false, 00:15:37.431 "compare_and_write": false, 00:15:37.431 "abort": false, 00:15:37.432 "seek_hole": true, 00:15:37.432 "seek_data": true, 00:15:37.432 "copy": false, 00:15:37.432 "nvme_iov_md": false 00:15:37.432 }, 00:15:37.432 "driver_specific": { 00:15:37.432 "lvol": { 00:15:37.432 "lvol_store_uuid": "f7de47bd-5a0d-43c8-a1e2-636b1a533250", 00:15:37.432 "base_bdev": "nvme0n1", 00:15:37.432 "thin_provision": true, 00:15:37.432 "num_allocated_clusters": 0, 00:15:37.432 "snapshot": false, 00:15:37.432 "clone": false, 00:15:37.432 "esnap_clone": false 00:15:37.432 } 00:15:37.432 } 00:15:37.432 } 00:15:37.432 ]' 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:37.432 11:58:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.690 11:58:35 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:37.690 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:37.949 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:37.949 { 00:15:37.949 "name": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:37.949 "aliases": [ 00:15:37.949 "lvs/nvme0n1p0" 00:15:37.949 ], 00:15:37.949 "product_name": "Logical Volume", 00:15:37.949 "block_size": 4096, 00:15:37.949 "num_blocks": 26476544, 00:15:37.949 "uuid": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:37.949 "assigned_rate_limits": { 00:15:37.949 "rw_ios_per_sec": 0, 00:15:37.949 "rw_mbytes_per_sec": 0, 00:15:37.949 "r_mbytes_per_sec": 0, 00:15:37.949 "w_mbytes_per_sec": 0 00:15:37.950 }, 00:15:37.950 "claimed": false, 00:15:37.950 "zoned": false, 00:15:37.950 "supported_io_types": { 00:15:37.950 "read": true, 00:15:37.950 "write": true, 00:15:37.950 "unmap": true, 00:15:37.950 "flush": false, 00:15:37.950 "reset": true, 00:15:37.950 "nvme_admin": false, 00:15:37.950 "nvme_io": false, 00:15:37.950 "nvme_io_md": false, 00:15:37.950 "write_zeroes": true, 00:15:37.950 "zcopy": false, 00:15:37.950 "get_zone_info": false, 00:15:37.950 "zone_management": false, 00:15:37.950 "zone_append": false, 00:15:37.950 "compare": false, 00:15:37.950 "compare_and_write": false, 00:15:37.950 "abort": false, 00:15:37.950 "seek_hole": true, 00:15:37.950 "seek_data": true, 00:15:37.950 "copy": false, 00:15:37.950 "nvme_iov_md": false 00:15:37.950 }, 00:15:37.950 "driver_specific": { 00:15:37.950 "lvol": { 00:15:37.950 "lvol_store_uuid": "f7de47bd-5a0d-43c8-a1e2-636b1a533250", 00:15:37.950 "base_bdev": "nvme0n1", 00:15:37.950 "thin_provision": true, 00:15:37.950 "num_allocated_clusters": 0, 00:15:37.950 "snapshot": false, 00:15:37.950 "clone": false, 00:15:37.950 "esnap_clone": false 00:15:37.950 } 00:15:37.950 } 00:15:37.950 } 00:15:37.950 ]' 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:37.950 11:58:35 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:38.209 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:38.209 { 00:15:38.209 "name": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:38.209 "aliases": [ 00:15:38.209 "lvs/nvme0n1p0" 00:15:38.209 ], 00:15:38.209 "product_name": "Logical Volume", 00:15:38.209 "block_size": 4096, 00:15:38.209 "num_blocks": 26476544, 00:15:38.209 "uuid": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:38.209 "assigned_rate_limits": { 00:15:38.209 "rw_ios_per_sec": 0, 00:15:38.209 "rw_mbytes_per_sec": 0, 00:15:38.209 "r_mbytes_per_sec": 0, 00:15:38.209 "w_mbytes_per_sec": 0 00:15:38.209 }, 00:15:38.209 "claimed": false, 00:15:38.209 "zoned": false, 00:15:38.209 "supported_io_types": { 00:15:38.209 "read": true, 00:15:38.209 "write": true, 00:15:38.209 "unmap": true, 00:15:38.209 "flush": false, 00:15:38.209 "reset": true, 00:15:38.209 "nvme_admin": false, 00:15:38.209 "nvme_io": false, 00:15:38.209 "nvme_io_md": false, 00:15:38.209 "write_zeroes": true, 00:15:38.209 "zcopy": false, 00:15:38.209 "get_zone_info": false, 00:15:38.209 "zone_management": false, 00:15:38.209 "zone_append": false, 00:15:38.209 "compare": false, 00:15:38.209 "compare_and_write": false, 00:15:38.209 "abort": false, 00:15:38.209 "seek_hole": true, 00:15:38.209 "seek_data": true, 00:15:38.209 "copy": false, 00:15:38.209 "nvme_iov_md": false 00:15:38.209 }, 00:15:38.209 "driver_specific": { 00:15:38.209 "lvol": { 00:15:38.209 "lvol_store_uuid": "f7de47bd-5a0d-43c8-a1e2-636b1a533250", 00:15:38.209 "base_bdev": "nvme0n1", 00:15:38.209 "thin_provision": true, 00:15:38.209 "num_allocated_clusters": 0, 00:15:38.209 "snapshot": false, 00:15:38.209 "clone": false, 00:15:38.209 "esnap_clone": false 00:15:38.209 } 00:15:38.209 } 00:15:38.209 } 00:15:38.209 ]' 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:38.209 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:38.469 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:38.469 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:38.469 11:58:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:38.469 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:38.469 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:38.469 11:58:35 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5b06aef9-9fca-4a35-84e7-a92482a0b0d6 -c nvc0n1p0 --l2p_dram_limit 60 00:15:38.469 [2024-11-18 11:58:36.113027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.113151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:38.469 [2024-11-18 11:58:36.113208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:38.469 
[2024-11-18 11:58:36.113228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.113299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.113318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:38.469 [2024-11-18 11:58:36.113336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:38.469 [2024-11-18 11:58:36.113343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.113387] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:38.469 [2024-11-18 11:58:36.114007] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:38.469 [2024-11-18 11:58:36.114028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.114035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:38.469 [2024-11-18 11:58:36.114043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:15:38.469 [2024-11-18 11:58:36.114049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.114106] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cdbe34ee-4005-4a18-a272-5aa3f38ab266 00:15:38.469 [2024-11-18 11:58:36.115112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.115200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:38.469 [2024-11-18 11:58:36.115212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:38.469 [2024-11-18 11:58:36.115220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.120373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.120399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:38.469 [2024-11-18 11:58:36.120406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:15:38.469 [2024-11-18 11:58:36.120417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.120495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.120503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:38.469 [2024-11-18 11:58:36.120510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:38.469 [2024-11-18 11:58:36.120520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.120569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.120578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:38.469 [2024-11-18 11:58:36.120595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:38.469 [2024-11-18 11:58:36.120603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.120627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:38.469 [2024-11-18 11:58:36.123529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 
11:58:36.123552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:38.469 [2024-11-18 11:58:36.123563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.906 ms 00:15:38.469 [2024-11-18 11:58:36.123571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.123618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.123625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:38.469 [2024-11-18 11:58:36.123633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:38.469 [2024-11-18 11:58:36.123638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.123669] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:38.469 [2024-11-18 11:58:36.123785] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:38.469 [2024-11-18 11:58:36.123797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:38.469 [2024-11-18 11:58:36.123805] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:38.469 [2024-11-18 11:58:36.123815] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:38.469 [2024-11-18 11:58:36.123822] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:38.469 [2024-11-18 11:58:36.123830] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:38.469 [2024-11-18 11:58:36.123836] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:38.469 [2024-11-18 11:58:36.123843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:38.469 [2024-11-18 11:58:36.123848] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:38.469 [2024-11-18 11:58:36.123857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.123862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:38.469 [2024-11-18 11:58:36.123871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:15:38.469 [2024-11-18 11:58:36.123877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.123951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.469 [2024-11-18 11:58:36.123957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:38.469 [2024-11-18 11:58:36.123964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:38.469 [2024-11-18 11:58:36.123971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.469 [2024-11-18 11:58:36.124063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:38.469 [2024-11-18 11:58:36.124072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:38.469 [2024-11-18 11:58:36.124080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:38.469 [2024-11-18 11:58:36.124086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.469 [2024-11-18 11:58:36.124093] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:38.469 [2024-11-18 11:58:36.124097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:38.469 [2024-11-18 11:58:36.124104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:38.469 [2024-11-18 11:58:36.124109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:38.469 [2024-11-18 11:58:36.124115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:38.469 [2024-11-18 11:58:36.124120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:38.469 [2024-11-18 11:58:36.124126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:38.469 [2024-11-18 11:58:36.124131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:38.469 [2024-11-18 11:58:36.124137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:38.469 [2024-11-18 11:58:36.124142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:38.470 [2024-11-18 11:58:36.124149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:38.470 [2024-11-18 11:58:36.124154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:38.470 [2024-11-18 11:58:36.124167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:38.470 [2024-11-18 11:58:36.124184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:38.470 [2024-11-18 11:58:36.124200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:38.470 [2024-11-18 11:58:36.124217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:38.470 [2024-11-18 11:58:36.124233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:38.470 [2024-11-18 11:58:36.124255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:38.470 [2024-11-18 11:58:36.124266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:38.470 [2024-11-18 11:58:36.124281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:38.470 [2024-11-18 11:58:36.124287] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:38.470 [2024-11-18 11:58:36.124292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:38.470 [2024-11-18 11:58:36.124298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:38.470 [2024-11-18 11:58:36.124303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:38.470 [2024-11-18 11:58:36.124314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:38.470 [2024-11-18 11:58:36.124321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124326] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:38.470 [2024-11-18 11:58:36.124333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:38.470 [2024-11-18 11:58:36.124338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:38.470 [2024-11-18 11:58:36.124350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:38.470 [2024-11-18 11:58:36.124358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:38.470 [2024-11-18 11:58:36.124363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:38.470 [2024-11-18 11:58:36.124369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:38.470 [2024-11-18 11:58:36.124374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:38.470 [2024-11-18 11:58:36.124380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:38.470 [2024-11-18 11:58:36.124387] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:38.470 [2024-11-18 11:58:36.124396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:38.470 [2024-11-18 11:58:36.124410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:38.470 [2024-11-18 11:58:36.124415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:38.470 [2024-11-18 11:58:36.124423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:38.470 [2024-11-18 11:58:36.124428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:38.470 [2024-11-18 11:58:36.124434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:38.470 [2024-11-18 11:58:36.124440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:38.470 [2024-11-18 11:58:36.124448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:38.470 [2024-11-18 11:58:36.124453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:38.470 [2024-11-18 11:58:36.124461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:38.470 [2024-11-18 11:58:36.124492] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:38.470 [2024-11-18 11:58:36.124500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:38.470 [2024-11-18 11:58:36.124514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:38.470 [2024-11-18 11:58:36.124519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:38.470 [2024-11-18 11:58:36.124526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:38.470 [2024-11-18 11:58:36.124531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:38.470 [2024-11-18 11:58:36.124539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:38.470 [2024-11-18 11:58:36.124544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:15:38.470 [2024-11-18 11:58:36.124551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:38.470 [2024-11-18 11:58:36.124634] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
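The layout records above describe each region twice: the dump_region lines print offsets and sizes in MiB, while the "SB metadata layout" lines print the same regions as hex block offsets (blk_offs) and sizes (blk_sz). The two agree once the blk_* values are multiplied by the FTL block size; below is a minimal cross-check sketch, assuming the 4096-byte block size that the bdev_get_bdevs output further down reports.

# Cross-check of the superblock layout dump above. blk_offs/blk_sz are in
# FTL blocks; the 4096-byte block size is an assumption taken from the
# "block_size": 4096 field in the bdev JSON later in this log.
BLOCK = 4096
MIB = 1024 * 1024

# (region, blk_offs, blk_sz) copied from the SB metadata layout records
regions = [
    ("l2p (nvc type 0x2)",       0x20,      0x5000),
    ("p2l0 (nvc type 0xa)",      0x5120,    0x800),
    ("data_btm (base type 0x9)", 0x40,      0x1900000),
    ("vmap (base type 0x5)",     0x1900040, 0x360),
]

for name, offs, size in regions:
    print(f"{name}: offset {offs * BLOCK / MIB:.2f} MiB, "
          f"size {size * BLOCK / MIB:.2f} MiB")

The printed values match the dump_region lines (l2p at 0.12 MiB / 80.00 MiB, p2l0 at 81.12 MiB / 8.00 MiB, data_btm at 0.25 MiB / 102400.00 MiB, vmap at 102400.25 MiB / 3.38 MiB), and the 80 MiB l2p region is exactly "L2P entries: 20971520" times the 4-byte "L2P address size".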
00:15:38.470 [2024-11-18 11:58:36.124645] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:41.750 [2024-11-18 11:58:38.821489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.821733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:41.750 [2024-11-18 11:58:38.821754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2696.842 ms 00:15:41.750 [2024-11-18 11:58:38.821766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.847260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.847301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:41.750 [2024-11-18 11:58:38.847313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.288 ms 00:15:41.750 [2024-11-18 11:58:38.847323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.847450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.847463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:41.750 [2024-11-18 11:58:38.847472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:41.750 [2024-11-18 11:58:38.847483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.886937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.887107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:41.750 [2024-11-18 11:58:38.887129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.406 ms 00:15:41.750 [2024-11-18 11:58:38.887142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.887190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.887202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:41.750 [2024-11-18 11:58:38.887212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:41.750 [2024-11-18 11:58:38.887222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.887638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.887659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:41.750 [2024-11-18 11:58:38.887672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:15:41.750 [2024-11-18 11:58:38.887683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.887827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.887840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:41.750 [2024-11-18 11:58:38.887850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:15:41.750 [2024-11-18 11:58:38.887862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.903555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.903600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:41.750 [2024-11-18 
11:58:38.903610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.665 ms 00:15:41.750 [2024-11-18 11:58:38.903620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.914990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:41.750 [2024-11-18 11:58:38.929284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.929325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:41.750 [2024-11-18 11:58:38.929339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.557 ms 00:15:41.750 [2024-11-18 11:58:38.929346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.974126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.974159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:41.750 [2024-11-18 11:58:38.974174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.745 ms 00:15:41.750 [2024-11-18 11:58:38.974182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.974364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.974374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:41.750 [2024-11-18 11:58:38.974386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:15:41.750 [2024-11-18 11:58:38.974394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:38.996847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:38.996983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:41.750 [2024-11-18 11:58:38.997003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.394 ms 00:15:41.750 [2024-11-18 11:58:38.997011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.750 [2024-11-18 11:58:39.019280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.750 [2024-11-18 11:58:39.019309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:41.750 [2024-11-18 11:58:39.019322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.214 ms 00:15:41.750 [2024-11-18 11:58:39.019335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.019949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.019979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:41.751 [2024-11-18 11:58:39.019990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:15:41.751 [2024-11-18 11:58:39.019997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.081700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.081730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:41.751 [2024-11-18 11:58:39.081746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.656 ms 00:15:41.751 [2024-11-18 11:58:39.081755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 
11:58:39.105744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.105773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:41.751 [2024-11-18 11:58:39.105786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.899 ms 00:15:41.751 [2024-11-18 11:58:39.105793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.128224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.128251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:41.751 [2024-11-18 11:58:39.128264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.381 ms 00:15:41.751 [2024-11-18 11:58:39.128271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.151052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.151174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:41.751 [2024-11-18 11:58:39.151193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.739 ms 00:15:41.751 [2024-11-18 11:58:39.151200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.151245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.151253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:41.751 [2024-11-18 11:58:39.151267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:41.751 [2024-11-18 11:58:39.151274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.151364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.751 [2024-11-18 11:58:39.151373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:41.751 [2024-11-18 11:58:39.151383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:41.751 [2024-11-18 11:58:39.151390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.751 [2024-11-18 11:58:39.152274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3038.829 ms, result 0 00:15:41.751 { 00:15:41.751 "name": "ftl0", 00:15:41.751 "uuid": "cdbe34ee-4005-4a18-a272-5aa3f38ab266" 00:15:41.751 } 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:41.751 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:42.009 [ 00:15:42.009 { 00:15:42.009 "name": "ftl0", 00:15:42.009 "aliases": [ 00:15:42.009 "cdbe34ee-4005-4a18-a272-5aa3f38ab266" 00:15:42.009 ], 00:15:42.009 "product_name": "FTL 
disk", 00:15:42.009 "block_size": 4096, 00:15:42.009 "num_blocks": 20971520, 00:15:42.009 "uuid": "cdbe34ee-4005-4a18-a272-5aa3f38ab266", 00:15:42.009 "assigned_rate_limits": { 00:15:42.009 "rw_ios_per_sec": 0, 00:15:42.009 "rw_mbytes_per_sec": 0, 00:15:42.009 "r_mbytes_per_sec": 0, 00:15:42.009 "w_mbytes_per_sec": 0 00:15:42.009 }, 00:15:42.009 "claimed": false, 00:15:42.009 "zoned": false, 00:15:42.009 "supported_io_types": { 00:15:42.009 "read": true, 00:15:42.009 "write": true, 00:15:42.009 "unmap": true, 00:15:42.009 "flush": true, 00:15:42.009 "reset": false, 00:15:42.009 "nvme_admin": false, 00:15:42.009 "nvme_io": false, 00:15:42.009 "nvme_io_md": false, 00:15:42.009 "write_zeroes": true, 00:15:42.009 "zcopy": false, 00:15:42.009 "get_zone_info": false, 00:15:42.009 "zone_management": false, 00:15:42.009 "zone_append": false, 00:15:42.009 "compare": false, 00:15:42.009 "compare_and_write": false, 00:15:42.009 "abort": false, 00:15:42.009 "seek_hole": false, 00:15:42.009 "seek_data": false, 00:15:42.009 "copy": false, 00:15:42.009 "nvme_iov_md": false 00:15:42.009 }, 00:15:42.009 "driver_specific": { 00:15:42.009 "ftl": { 00:15:42.009 "base_bdev": "5b06aef9-9fca-4a35-84e7-a92482a0b0d6", 00:15:42.009 "cache": "nvc0n1p0" 00:15:42.009 } 00:15:42.009 } 00:15:42.009 } 00:15:42.009 ] 00:15:42.009 11:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:15:42.009 11:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:42.009 11:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:42.267 11:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:42.267 11:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:42.267 [2024-11-18 11:58:39.961443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.267 [2024-11-18 11:58:39.961484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:42.267 [2024-11-18 11:58:39.961495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:42.267 [2024-11-18 11:58:39.961505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.267 [2024-11-18 11:58:39.961537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:42.267 [2024-11-18 11:58:39.964195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.267 [2024-11-18 11:58:39.964222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:42.267 [2024-11-18 11:58:39.964236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:15:42.267 [2024-11-18 11:58:39.964245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.267 [2024-11-18 11:58:39.964745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:39.964852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:42.528 [2024-11-18 11:58:39.964869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:15:42.528 [2024-11-18 11:58:39.964876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:39.968114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:39.968194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:42.528 
[2024-11-18 11:58:39.968209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:15:42.528 [2024-11-18 11:58:39.968216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:39.974401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:39.974426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:42.528 [2024-11-18 11:58:39.974438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:15:42.528 [2024-11-18 11:58:39.974446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:39.997794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:39.997823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:42.528 [2024-11-18 11:58:39.997835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.260 ms 00:15:42.528 [2024-11-18 11:58:39.997842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.012086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:40.012116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:42.528 [2024-11-18 11:58:40.012132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.185 ms 00:15:42.528 [2024-11-18 11:58:40.012141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.012337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:40.012348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:42.528 [2024-11-18 11:58:40.012358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:15:42.528 [2024-11-18 11:58:40.012365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.034810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:40.034920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:42.528 [2024-11-18 11:58:40.034937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.414 ms 00:15:42.528 [2024-11-18 11:58:40.034944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.057704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:40.057828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:42.528 [2024-11-18 11:58:40.057850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.476 ms 00:15:42.528 [2024-11-18 11:58:40.057858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.079857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:40.079887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:42.528 [2024-11-18 11:58:40.079899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.954 ms 00:15:42.528 [2024-11-18 11:58:40.079906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.102127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.528 [2024-11-18 11:58:40.102155] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:42.528 [2024-11-18 11:58:40.102167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.114 ms 00:15:42.528 [2024-11-18 11:58:40.102174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.528 [2024-11-18 11:58:40.102223] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:42.528 [2024-11-18 11:58:40.102235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 
[2024-11-18 11:58:40.102416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:42.528 [2024-11-18 11:58:40.102641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:42.528 [2024-11-18 11:58:40.102689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.102999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:42.529 [2024-11-18 11:58:40.103105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:42.529 [2024-11-18 11:58:40.103114] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cdbe34ee-4005-4a18-a272-5aa3f38ab266 00:15:42.529 [2024-11-18 11:58:40.103122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:42.529 [2024-11-18 11:58:40.103132] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:42.529 [2024-11-18 11:58:40.103139] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:42.529 [2024-11-18 11:58:40.103149] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:42.529 [2024-11-18 11:58:40.103156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:42.529 [2024-11-18 11:58:40.103164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:42.529 [2024-11-18 11:58:40.103171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:42.529 [2024-11-18 11:58:40.103179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:42.529 [2024-11-18 11:58:40.103185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:42.529 [2024-11-18 11:58:40.103193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.529 [2024-11-18 11:58:40.103201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:42.529 [2024-11-18 11:58:40.103212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:15:42.529 [2024-11-18 11:58:40.103219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.529 [2024-11-18 11:58:40.115763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.529 [2024-11-18 11:58:40.115790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:42.529 [2024-11-18 11:58:40.115802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.507 ms 00:15:42.529 [2024-11-18 11:58:40.115810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.529 [2024-11-18 11:58:40.116155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.529 [2024-11-18 11:58:40.116163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:42.529 [2024-11-18 11:58:40.116173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:15:42.529 [2024-11-18 11:58:40.116179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.529 [2024-11-18 11:58:40.159837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.529 [2024-11-18 11:58:40.159868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.529 [2024-11-18 11:58:40.159879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.529 [2024-11-18 11:58:40.159886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
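Every management step in this trace is bracketed by the same four records (Action or Rollback, name, duration, status), and finish_msg totals them per process: 'FTL startup' came to 3038.829 ms above, dominated by the 2696.842 ms "Scrub NV cache" step, and 'FTL shutdown' is totalled the same way just below. A minimal sketch for ranking the slowest steps from a saved copy of this console output; "ftl.log" is a hypothetical filename, and the regexes assume only the name:/duration: patterns visible here, with one record per console line as originally emitted.

import re

text = open("ftl.log").read()
# Each trace_step logs "name: <step>" and then "duration: <ms> ms", so the
# two lists pair up in order (finish_msg prints "duration =" and is skipped).
names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}", text)
durations = re.findall(r"duration: ([0-9.]+) ms", text)

for step, ms in sorted(zip(names, durations), key=lambda t: -float(t[1]))[:5]:
    print(f"{float(ms):9.3f} ms  {step}")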
00:15:42.529 [2024-11-18 11:58:40.159951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.529 [2024-11-18 11:58:40.159959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.529 [2024-11-18 11:58:40.159968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.529 [2024-11-18 11:58:40.159975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.529 [2024-11-18 11:58:40.160055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.529 [2024-11-18 11:58:40.160067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.529 [2024-11-18 11:58:40.160076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.529 [2024-11-18 11:58:40.160083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.529 [2024-11-18 11:58:40.160116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.529 [2024-11-18 11:58:40.160123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.529 [2024-11-18 11:58:40.160132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.529 [2024-11-18 11:58:40.160139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.240673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.240830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.788 [2024-11-18 11:58:40.240849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.240857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.302888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.302926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.788 [2024-11-18 11:58:40.302939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.302947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.303044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:42.788 [2024-11-18 11:58:40.303055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.303063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.303148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:42.788 [2024-11-18 11:58:40.303157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.303164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.303277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:42.788 [2024-11-18 11:58:40.303287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 
11:58:40.303298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.303368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:42.788 [2024-11-18 11:58:40.303378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.303385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.303453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:42.788 [2024-11-18 11:58:40.303463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.303472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.788 [2024-11-18 11:58:40.303538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:42.788 [2024-11-18 11:58:40.303547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.788 [2024-11-18 11:58:40.303555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.788 [2024-11-18 11:58:40.303750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.275 ms, result 0 00:15:42.788 true 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72272 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72272 ']' 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72272 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72272 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72272' 00:15:42.788 killing process with pid 72272 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72272 00:15:42.788 11:58:40 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72272 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:49.349 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:49.349 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:49.349 fio-3.35 00:15:49.349 Starting 1 thread 00:15:53.554 00:15:53.554 test: (groupid=0, jobs=1): err= 0: pid=72456: Mon Nov 18 11:58:50 2024 00:15:53.554 read: IOPS=1098, BW=73.0MiB/s (76.5MB/s)(255MiB/3489msec) 00:15:53.554 slat (nsec): min=2894, max=44961, avg=4529.00, stdev=2404.62 00:15:53.554 clat (usec): min=245, max=15468, avg=412.27, stdev=281.06 00:15:53.554 lat (usec): min=262, max=15471, avg=416.80, stdev=281.44 00:15:53.554 clat percentiles (usec): 00:15:53.554 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 314], 00:15:53.554 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 383], 00:15:53.554 | 70.00th=[ 461], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 750], 00:15:53.554 | 99.00th=[ 881], 99.50th=[ 930], 99.90th=[ 1012], 99.95th=[ 1037], 00:15:53.554 | 99.99th=[15533] 00:15:53.554 write: IOPS=1107, BW=73.5MiB/s (77.1MB/s)(256MiB/3483msec); 0 zone resets 00:15:53.554 slat (nsec): min=13187, max=84542, avg=18611.69, stdev=4862.13 00:15:53.554 clat (usec): min=261, max=2785, avg=458.84, stdev=196.36 00:15:53.554 lat (usec): min=281, max=2816, avg=477.46, stdev=197.84 00:15:53.554 clat percentiles (usec): 00:15:53.554 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 338], 00:15:53.554 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 408], 00:15:53.554 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 873], 00:15:53.554 | 99.00th=[ 1045], 99.50th=[ 1254], 99.90th=[ 2180], 99.95th=[ 2474], 00:15:53.554 | 99.99th=[ 2769] 00:15:53.554 bw ( KiB/s): min=52632, max=95064, per=100.00%, avg=75480.00, stdev=17101.43, samples=6 00:15:53.554 iops : min= 774, max= 1398, avg=1110.00, stdev=251.49, samples=6 00:15:53.554 lat (usec) : 250=0.01%, 500=70.80%, 
750=23.07%, 1000=5.37% 00:15:53.554 lat (msec) : 2=0.66%, 4=0.07%, 20=0.01% 00:15:53.554 cpu : usr=99.23%, sys=0.11%, ctx=6, majf=0, minf=1169 00:15:53.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.554 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.554 00:15:53.554 Run status group 0 (all jobs): 00:15:53.554 READ: bw=73.0MiB/s (76.5MB/s), 73.0MiB/s-73.0MiB/s (76.5MB/s-76.5MB/s), io=255MiB (267MB), run=3489-3489msec 00:15:53.554 WRITE: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=256MiB (269MB), run=3483-3483msec 00:15:54.942 ----------------------------------------------------- 00:15:54.942 Suppressions used: 00:15:54.942 count bytes template 00:15:54.942 1 5 /usr/src/fio/parse.c 00:15:54.942 1 8 libtcmalloc_minimal.so 00:15:54.942 1 904 libcrypto.so 00:15:54.942 ----------------------------------------------------- 00:15:54.942 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:54.942 11:58:52 
ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:54.942 11:58:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:55.204 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:55.204 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:55.204 fio-3.35 00:15:55.204 Starting 2 threads 00:16:21.749 00:16:21.749 first_half: (groupid=0, jobs=1): err= 0: pid=72559: Mon Nov 18 11:59:16 2024 00:16:21.749 read: IOPS=2970, BW=11.6MiB/s (12.2MB/s)(255MiB/21948msec) 00:16:21.749 slat (nsec): min=2980, max=22295, avg=4755.39, stdev=1166.96 00:16:21.749 clat (usec): min=566, max=266129, avg=32449.37, stdev=15595.10 00:16:21.749 lat (usec): min=571, max=266133, avg=32454.12, stdev=15595.16 00:16:21.749 clat percentiles (msec): 00:16:21.749 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 29], 00:16:21.749 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:21.749 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 36], 95.00th=[ 40], 00:16:21.749 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 215], 99.95th=[ 232], 00:16:21.749 | 99.99th=[ 259] 00:16:21.749 write: IOPS=4251, BW=16.6MiB/s (17.4MB/s)(256MiB/15416msec); 0 zone resets 00:16:21.749 slat (usec): min=3, max=715, avg= 6.02, stdev= 3.97 00:16:21.749 clat (usec): min=338, max=81881, avg=10569.16, stdev=19025.56 00:16:21.749 lat (usec): min=344, max=81890, avg=10575.18, stdev=19025.53 00:16:21.749 clat percentiles (usec): 00:16:21.749 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 898], 20.00th=[ 1106], 00:16:21.749 | 30.00th=[ 1287], 40.00th=[ 1893], 50.00th=[ 3556], 60.00th=[ 4752], 00:16:21.749 | 70.00th=[ 5604], 80.00th=[10421], 90.00th=[55837], 95.00th=[63177], 00:16:21.749 | 99.00th=[72877], 99.50th=[74974], 99.90th=[80217], 99.95th=[81265], 00:16:21.749 | 99.99th=[81265] 00:16:21.749 bw ( KiB/s): min= 256, max=47312, per=91.36%, avg=23831.27, stdev=14927.80, samples=22 00:16:21.749 iops : min= 64, max=11828, avg=5957.82, stdev=3731.95, samples=22 00:16:21.749 lat (usec) : 500=0.02%, 750=2.36%, 1000=4.93% 00:16:21.749 lat (msec) : 2=13.35%, 4=6.19%, 10=13.56%, 20=5.36%, 50=47.17% 00:16:21.749 lat (msec) : 100=6.41%, 250=0.64%, 500=0.01% 00:16:21.749 cpu : usr=99.26%, sys=0.14%, ctx=44, majf=0, minf=5563 00:16:21.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:21.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.749 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.749 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.749 second_half: (groupid=0, jobs=1): err= 0: pid=72560: Mon Nov 18 11:59:16 2024 00:16:21.749 read: IOPS=2954, BW=11.5MiB/s (12.1MB/s)(255MiB/22109msec) 00:16:21.749 slat (usec): min=2, max=516, avg= 4.34, stdev= 3.08 00:16:21.749 clat (usec): min=622, max=272457, avg=31495.27, stdev=13728.44 00:16:21.749 lat (usec): min=626, max=272471, avg=31499.60, stdev=13728.60 00:16:21.749 clat percentiles (msec): 00:16:21.749 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 28], 00:16:21.749 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:21.749 | 70.00th=[ 32], 80.00th=[ 32], 
90.00th=[ 36], 95.00th=[ 40], 00:16:21.749 | 99.00th=[ 104], 99.50th=[ 128], 99.90th=[ 163], 99.95th=[ 186], 00:16:21.749 | 99.99th=[ 266] 00:16:21.749 write: IOPS=3260, BW=12.7MiB/s (13.4MB/s)(256MiB/20100msec); 0 zone resets 00:16:21.749 slat (usec): min=3, max=573, avg= 6.27, stdev= 4.47 00:16:21.749 clat (usec): min=341, max=82753, avg=11776.56, stdev=19399.93 00:16:21.749 lat (usec): min=350, max=82759, avg=11782.83, stdev=19400.05 00:16:21.749 clat percentiles (usec): 00:16:21.749 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 791], 20.00th=[ 1020], 00:16:21.749 | 30.00th=[ 1352], 40.00th=[ 3654], 50.00th=[ 4817], 60.00th=[ 5342], 00:16:21.749 | 70.00th=[ 8160], 80.00th=[11469], 90.00th=[56886], 95.00th=[64226], 00:16:21.749 | 99.00th=[73925], 99.50th=[76022], 99.90th=[81265], 99.95th=[81265], 00:16:21.749 | 99.99th=[82314] 00:16:21.749 bw ( KiB/s): min= 328, max=41056, per=83.76%, avg=21848.71, stdev=14902.24, samples=24 00:16:21.749 iops : min= 82, max=10264, avg=5462.17, stdev=3725.55, samples=24 00:16:21.749 lat (usec) : 500=0.02%, 750=3.68%, 1000=5.98% 00:16:21.749 lat (msec) : 2=7.29%, 4=4.19%, 10=18.18%, 20=6.37%, 50=47.27% 00:16:21.749 lat (msec) : 100=6.48%, 250=0.54%, 500=0.01% 00:16:21.749 cpu : usr=98.78%, sys=0.35%, ctx=276, majf=0, minf=5540 00:16:21.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:21.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.749 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.749 issued rwts: total=65310,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.749 00:16:21.749 Run status group 0 (all jobs): 00:16:21.749 READ: bw=23.1MiB/s (24.2MB/s), 11.5MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=510MiB (535MB), run=21948-22109msec 00:16:21.749 WRITE: bw=25.5MiB/s (26.7MB/s), 12.7MiB/s-16.6MiB/s (13.4MB/s-17.4MB/s), io=512MiB (537MB), run=15416-20100msec 00:16:21.749 ----------------------------------------------------- 00:16:21.749 Suppressions used: 00:16:21.749 count bytes template 00:16:21.749 2 10 /usr/src/fio/parse.c 00:16:21.749 2 192 /usr/src/fio/iolog.c 00:16:21.749 1 8 libtcmalloc_minimal.so 00:16:21.749 1 904 libcrypto.so 00:16:21.749 ----------------------------------------------------- 00:16:21.749 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:21.749 11:59:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:21.749 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:21.749 fio-3.35 00:16:21.749 Starting 1 thread 00:16:39.898 00:16:39.898 test: (groupid=0, jobs=1): err= 0: pid=72850: Mon Nov 18 11:59:35 2024 00:16:39.898 read: IOPS=6707, BW=26.2MiB/s (27.5MB/s)(255MiB/9721msec) 00:16:39.898 slat (usec): min=3, max=543, avg= 4.71, stdev= 2.39 00:16:39.898 clat (usec): min=1044, max=36493, avg=19074.81, stdev=2696.89 00:16:39.898 lat (usec): min=1052, max=36498, avg=19079.52, stdev=2696.89 00:16:39.898 clat percentiles (usec): 00:16:39.898 | 1.00th=[14615], 5.00th=[15401], 10.00th=[15926], 20.00th=[16909], 00:16:39.898 | 30.00th=[17695], 40.00th=[18220], 50.00th=[19006], 60.00th=[19268], 00:16:39.898 | 70.00th=[20055], 80.00th=[20579], 90.00th=[22152], 95.00th=[24249], 00:16:39.898 | 99.00th=[27395], 99.50th=[28967], 99.90th=[33817], 99.95th=[34341], 00:16:39.898 | 99.99th=[36439] 00:16:39.898 write: IOPS=9810, BW=38.3MiB/s (40.2MB/s)(256MiB/6680msec); 0 zone resets 00:16:39.898 slat (usec): min=4, max=435, avg= 6.20, stdev= 3.99 00:16:39.898 clat (usec): min=556, max=74559, avg=12990.29, stdev=15771.43 00:16:39.898 lat (usec): min=562, max=74564, avg=12996.50, stdev=15771.44 00:16:39.898 clat percentiles (usec): 00:16:39.898 | 1.00th=[ 1139], 5.00th=[ 1450], 10.00th=[ 1647], 20.00th=[ 1942], 00:16:39.898 | 30.00th=[ 2311], 40.00th=[ 3195], 50.00th=[ 7635], 60.00th=[10028], 00:16:39.898 | 70.00th=[12387], 80.00th=[15401], 90.00th=[45876], 95.00th=[49021], 00:16:39.898 | 99.00th=[54789], 99.50th=[56886], 99.90th=[62129], 99.95th=[64750], 00:16:39.898 | 99.99th=[69731] 00:16:39.898 bw ( KiB/s): min=11064, max=62168, per=95.43%, avg=37449.14, stdev=11179.25, samples=14 00:16:39.898 iops : min= 2766, max=15542, avg=9362.29, stdev=2794.81, samples=14 00:16:39.898 lat (usec) : 750=0.01%, 1000=0.17% 00:16:39.898 lat (msec) : 2=10.75%, 4=9.82%, 10=9.19%, 20=47.26%, 50=20.78% 00:16:39.898 lat (msec) : 100=2.03% 00:16:39.898 cpu : usr=99.13%, 
sys=0.13%, ctx=50, majf=0, minf=5565 00:16:39.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:39.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.898 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:39.898 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:39.898 00:16:39.898 Run status group 0 (all jobs): 00:16:39.898 READ: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=255MiB (267MB), run=9721-9721msec 00:16:39.898 WRITE: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=256MiB (268MB), run=6680-6680msec 00:16:39.898 ----------------------------------------------------- 00:16:39.898 Suppressions used: 00:16:39.898 count bytes template 00:16:39.898 1 5 /usr/src/fio/parse.c 00:16:39.898 2 192 /usr/src/fio/iolog.c 00:16:39.898 1 8 libtcmalloc_minimal.so 00:16:39.898 1 904 libcrypto.so 00:16:39.898 ----------------------------------------------------- 00:16:39.898 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:39.898 Remove shared memory files 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57133 /dev/shm/spdk_tgt_trace.pid71189 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:39.898 ************************************ 00:16:39.898 END TEST ftl_fio_basic 00:16:39.898 ************************************ 00:16:39.898 00:16:39.898 real 1m4.534s 00:16:39.898 user 2m16.136s 00:16:39.898 sys 0m2.838s 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.898 11:59:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:39.898 11:59:37 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:39.898 11:59:37 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:39.898 11:59:37 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.898 11:59:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:39.898 ************************************ 00:16:39.898 START TEST ftl_bdevperf 00:16:39.898 ************************************ 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:39.898 * Looking for test storage... 
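All three fio_bdev runs above (randw-verify, randw-verify-j2, randw-verify-depth128) go through the same wrapper in autotest_common.sh: it runs ldd against the external fio plugin, greps out the libasan runtime the plugin was linked with, and preloads that runtime ahead of the plugin so ASan initializes before fio dlopens the ioengine. A minimal sketch of that pattern, using the paths that appear in this log:

# Find the ASan runtime the spdk_bdev fio plugin links against, then
# preload it together with the plugin when invoking fio.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

The preload order matters: the sanitizer runtime has to come first in LD_PRELOAD, otherwise ASan complains that it is not the first DSO in the initial library list and its interceptors miss allocations made while the plugin loads.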
00:16:39.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:39.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.898 --rc genhtml_branch_coverage=1 00:16:39.898 --rc genhtml_function_coverage=1 00:16:39.898 --rc genhtml_legend=1 00:16:39.898 --rc geninfo_all_blocks=1 00:16:39.898 --rc geninfo_unexecuted_blocks=1 00:16:39.898 00:16:39.898 ' 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:39.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.898 --rc genhtml_branch_coverage=1 00:16:39.898 
--rc genhtml_function_coverage=1 00:16:39.898 --rc genhtml_legend=1 00:16:39.898 --rc geninfo_all_blocks=1 00:16:39.898 --rc geninfo_unexecuted_blocks=1 00:16:39.898 00:16:39.898 ' 00:16:39.898 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:39.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.899 --rc genhtml_branch_coverage=1 00:16:39.899 --rc genhtml_function_coverage=1 00:16:39.899 --rc genhtml_legend=1 00:16:39.899 --rc geninfo_all_blocks=1 00:16:39.899 --rc geninfo_unexecuted_blocks=1 00:16:39.899 00:16:39.899 ' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.899 --rc genhtml_branch_coverage=1 00:16:39.899 --rc genhtml_function_coverage=1 00:16:39.899 --rc genhtml_legend=1 00:16:39.899 --rc geninfo_all_blocks=1 00:16:39.899 --rc geninfo_unexecuted_blocks=1 00:16:39.899 00:16:39.899 ' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73124 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73124 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73124 ']' 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:39.899 11:59:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:39.899 [2024-11-18 11:59:37.454388] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
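bdevperf is launched here in RPC-driven mode: -z starts the application idle until a test is requested over the RPC socket, and -T ftl0 restricts the run to the named bdev under test. waitforlisten then blocks until the target answers on the default /var/tmp/spdk.sock. A rough equivalent of that startup handshake (the polling loop is a simplified stand-in for what waitforlisten does):

# Start bdevperf idle (-z) against the ftl0 bdev and remember its pid.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# Poll the default RPC socket until the target responds to an RPC.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done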
00:16:39.899 [2024-11-18 11:59:37.454514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73124 ] 00:16:40.158 [2024-11-18 11:59:37.616422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.158 [2024-11-18 11:59:37.713979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:40.729 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:40.990 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:41.251 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:41.251 { 00:16:41.251 "name": "nvme0n1", 00:16:41.251 "aliases": [ 00:16:41.251 "ca64679a-aa66-4958-b671-3e872e0b59ed" 00:16:41.251 ], 00:16:41.251 "product_name": "NVMe disk", 00:16:41.251 "block_size": 4096, 00:16:41.251 "num_blocks": 1310720, 00:16:41.251 "uuid": "ca64679a-aa66-4958-b671-3e872e0b59ed", 00:16:41.251 "numa_id": -1, 00:16:41.251 "assigned_rate_limits": { 00:16:41.251 "rw_ios_per_sec": 0, 00:16:41.252 "rw_mbytes_per_sec": 0, 00:16:41.252 "r_mbytes_per_sec": 0, 00:16:41.252 "w_mbytes_per_sec": 0 00:16:41.252 }, 00:16:41.252 "claimed": true, 00:16:41.252 "claim_type": "read_many_write_one", 00:16:41.252 "zoned": false, 00:16:41.252 "supported_io_types": { 00:16:41.252 "read": true, 00:16:41.252 "write": true, 00:16:41.252 "unmap": true, 00:16:41.252 "flush": true, 00:16:41.252 "reset": true, 00:16:41.252 "nvme_admin": true, 00:16:41.252 "nvme_io": true, 00:16:41.252 "nvme_io_md": false, 00:16:41.252 "write_zeroes": true, 00:16:41.252 "zcopy": false, 00:16:41.252 "get_zone_info": false, 00:16:41.252 "zone_management": false, 00:16:41.252 "zone_append": false, 00:16:41.252 "compare": true, 00:16:41.252 "compare_and_write": false, 00:16:41.252 "abort": true, 00:16:41.252 "seek_hole": false, 00:16:41.252 "seek_data": false, 00:16:41.252 "copy": true, 00:16:41.252 "nvme_iov_md": false 00:16:41.252 }, 00:16:41.252 "driver_specific": { 00:16:41.252 
"nvme": [ 00:16:41.252 { 00:16:41.252 "pci_address": "0000:00:11.0", 00:16:41.252 "trid": { 00:16:41.252 "trtype": "PCIe", 00:16:41.252 "traddr": "0000:00:11.0" 00:16:41.252 }, 00:16:41.252 "ctrlr_data": { 00:16:41.252 "cntlid": 0, 00:16:41.252 "vendor_id": "0x1b36", 00:16:41.252 "model_number": "QEMU NVMe Ctrl", 00:16:41.252 "serial_number": "12341", 00:16:41.252 "firmware_revision": "8.0.0", 00:16:41.252 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:41.252 "oacs": { 00:16:41.252 "security": 0, 00:16:41.252 "format": 1, 00:16:41.252 "firmware": 0, 00:16:41.252 "ns_manage": 1 00:16:41.252 }, 00:16:41.252 "multi_ctrlr": false, 00:16:41.252 "ana_reporting": false 00:16:41.252 }, 00:16:41.252 "vs": { 00:16:41.252 "nvme_version": "1.4" 00:16:41.252 }, 00:16:41.252 "ns_data": { 00:16:41.252 "id": 1, 00:16:41.252 "can_share": false 00:16:41.252 } 00:16:41.252 } 00:16:41.252 ], 00:16:41.252 "mp_policy": "active_passive" 00:16:41.252 } 00:16:41.252 } 00:16:41.252 ]' 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:41.252 11:59:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:41.513 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=f7de47bd-5a0d-43c8-a1e2-636b1a533250 00:16:41.513 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:41.513 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7de47bd-5a0d-43c8-a1e2-636b1a533250 00:16:41.772 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f9062c08-201a-4ac9-b54f-e26b71000d9a 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f9062c08-201a-4ac9-b54f-e26b71000d9a 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.032 11:59:39 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:42.032 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.294 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:42.294 { 00:16:42.294 "name": "2a017b2f-849d-4e5b-9ea2-f7b6a8762df8", 00:16:42.294 "aliases": [ 00:16:42.294 "lvs/nvme0n1p0" 00:16:42.294 ], 00:16:42.294 "product_name": "Logical Volume", 00:16:42.294 "block_size": 4096, 00:16:42.294 "num_blocks": 26476544, 00:16:42.294 "uuid": "2a017b2f-849d-4e5b-9ea2-f7b6a8762df8", 00:16:42.294 "assigned_rate_limits": { 00:16:42.294 "rw_ios_per_sec": 0, 00:16:42.294 "rw_mbytes_per_sec": 0, 00:16:42.294 "r_mbytes_per_sec": 0, 00:16:42.294 "w_mbytes_per_sec": 0 00:16:42.294 }, 00:16:42.294 "claimed": false, 00:16:42.294 "zoned": false, 00:16:42.294 "supported_io_types": { 00:16:42.294 "read": true, 00:16:42.294 "write": true, 00:16:42.294 "unmap": true, 00:16:42.294 "flush": false, 00:16:42.294 "reset": true, 00:16:42.294 "nvme_admin": false, 00:16:42.294 "nvme_io": false, 00:16:42.294 "nvme_io_md": false, 00:16:42.294 "write_zeroes": true, 00:16:42.294 "zcopy": false, 00:16:42.294 "get_zone_info": false, 00:16:42.294 "zone_management": false, 00:16:42.294 "zone_append": false, 00:16:42.294 "compare": false, 00:16:42.294 "compare_and_write": false, 00:16:42.294 "abort": false, 00:16:42.294 "seek_hole": true, 00:16:42.294 "seek_data": true, 00:16:42.294 "copy": false, 00:16:42.294 "nvme_iov_md": false 00:16:42.294 }, 00:16:42.294 "driver_specific": { 00:16:42.294 "lvol": { 00:16:42.294 "lvol_store_uuid": "f9062c08-201a-4ac9-b54f-e26b71000d9a", 00:16:42.294 "base_bdev": "nvme0n1", 00:16:42.294 "thin_provision": true, 00:16:42.294 "num_allocated_clusters": 0, 00:16:42.294 "snapshot": false, 00:16:42.294 "clone": false, 00:16:42.294 "esnap_clone": false 00:16:42.294 } 00:16:42.294 } 00:16:42.294 } 00:16:42.294 ]' 00:16:42.294 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:42.294 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:42.294 11:59:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:42.555 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:42.555 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:42.555 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:42.555 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:42.555 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:42.555 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:42.816 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:43.078 { 00:16:43.078 "name": "2a017b2f-849d-4e5b-9ea2-f7b6a8762df8", 00:16:43.078 "aliases": [ 00:16:43.078 "lvs/nvme0n1p0" 00:16:43.078 ], 00:16:43.078 "product_name": "Logical Volume", 00:16:43.078 "block_size": 4096, 00:16:43.078 "num_blocks": 26476544, 00:16:43.078 "uuid": "2a017b2f-849d-4e5b-9ea2-f7b6a8762df8", 00:16:43.078 "assigned_rate_limits": { 00:16:43.078 "rw_ios_per_sec": 0, 00:16:43.078 "rw_mbytes_per_sec": 0, 00:16:43.078 "r_mbytes_per_sec": 0, 00:16:43.078 "w_mbytes_per_sec": 0 00:16:43.078 }, 00:16:43.078 "claimed": false, 00:16:43.078 "zoned": false, 00:16:43.078 "supported_io_types": { 00:16:43.078 "read": true, 00:16:43.078 "write": true, 00:16:43.078 "unmap": true, 00:16:43.078 "flush": false, 00:16:43.078 "reset": true, 00:16:43.078 "nvme_admin": false, 00:16:43.078 "nvme_io": false, 00:16:43.078 "nvme_io_md": false, 00:16:43.078 "write_zeroes": true, 00:16:43.078 "zcopy": false, 00:16:43.078 "get_zone_info": false, 00:16:43.078 "zone_management": false, 00:16:43.078 "zone_append": false, 00:16:43.078 "compare": false, 00:16:43.078 "compare_and_write": false, 00:16:43.078 "abort": false, 00:16:43.078 "seek_hole": true, 00:16:43.078 "seek_data": true, 00:16:43.078 "copy": false, 00:16:43.078 "nvme_iov_md": false 00:16:43.078 }, 00:16:43.078 "driver_specific": { 00:16:43.078 "lvol": { 00:16:43.078 "lvol_store_uuid": "f9062c08-201a-4ac9-b54f-e26b71000d9a", 00:16:43.078 "base_bdev": "nvme0n1", 00:16:43.078 "thin_provision": true, 00:16:43.078 "num_allocated_clusters": 0, 00:16:43.078 "snapshot": false, 00:16:43.078 "clone": false, 00:16:43.078 "esnap_clone": false 00:16:43.078 } 00:16:43.078 } 00:16:43.078 } 00:16:43.078 ]' 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:43.078 11:59:40 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:43.341 11:59:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 00:16:43.341 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:43.341 { 00:16:43.341 "name": "2a017b2f-849d-4e5b-9ea2-f7b6a8762df8", 00:16:43.341 "aliases": [ 00:16:43.341 "lvs/nvme0n1p0" 00:16:43.341 ], 00:16:43.341 "product_name": "Logical Volume", 00:16:43.341 "block_size": 4096, 00:16:43.341 "num_blocks": 26476544, 00:16:43.341 "uuid": "2a017b2f-849d-4e5b-9ea2-f7b6a8762df8", 00:16:43.341 "assigned_rate_limits": { 00:16:43.341 "rw_ios_per_sec": 0, 00:16:43.341 "rw_mbytes_per_sec": 0, 00:16:43.341 "r_mbytes_per_sec": 0, 00:16:43.341 "w_mbytes_per_sec": 0 00:16:43.341 }, 00:16:43.341 "claimed": false, 00:16:43.341 "zoned": false, 00:16:43.341 "supported_io_types": { 00:16:43.341 "read": true, 00:16:43.341 "write": true, 00:16:43.341 "unmap": true, 00:16:43.341 "flush": false, 00:16:43.341 "reset": true, 00:16:43.341 "nvme_admin": false, 00:16:43.341 "nvme_io": false, 00:16:43.341 "nvme_io_md": false, 00:16:43.341 "write_zeroes": true, 00:16:43.341 "zcopy": false, 00:16:43.341 "get_zone_info": false, 00:16:43.341 "zone_management": false, 00:16:43.341 "zone_append": false, 00:16:43.341 "compare": false, 00:16:43.341 "compare_and_write": false, 00:16:43.341 "abort": false, 00:16:43.341 "seek_hole": true, 00:16:43.341 "seek_data": true, 00:16:43.341 "copy": false, 00:16:43.341 "nvme_iov_md": false 00:16:43.341 }, 00:16:43.341 "driver_specific": { 00:16:43.341 "lvol": { 00:16:43.341 "lvol_store_uuid": "f9062c08-201a-4ac9-b54f-e26b71000d9a", 00:16:43.341 "base_bdev": "nvme0n1", 00:16:43.341 "thin_provision": true, 00:16:43.341 "num_allocated_clusters": 0, 00:16:43.341 "snapshot": false, 00:16:43.341 "clone": false, 00:16:43.341 "esnap_clone": false 00:16:43.341 } 00:16:43.341 } 00:16:43.341 } 00:16:43.341 ]' 00:16:43.341 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:43.605 11:59:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 -c nvc0n1p0 --l2p_dram_limit 20 00:16:43.605 [2024-11-18 11:59:41.273750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.273818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:43.605 [2024-11-18 11:59:41.273834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:43.605 [2024-11-18 11:59:41.273845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.273914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.273929] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.605 [2024-11-18 11:59:41.273938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:43.605 [2024-11-18 11:59:41.273948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.273968] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:43.605 [2024-11-18 11:59:41.274848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:43.605 [2024-11-18 11:59:41.274870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.274880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.605 [2024-11-18 11:59:41.274889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:16:43.605 [2024-11-18 11:59:41.274900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.274982] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f79d5c70-bc24-478a-b931-c20b7d618819 00:16:43.605 [2024-11-18 11:59:41.276792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.276842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:43.605 [2024-11-18 11:59:41.276858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:43.605 [2024-11-18 11:59:41.276870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.285472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.285514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.605 [2024-11-18 11:59:41.285527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.558 ms 00:16:43.605 [2024-11-18 11:59:41.285535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.285659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.285670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.605 [2024-11-18 11:59:41.285685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:43.605 [2024-11-18 11:59:41.285693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.285776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.285787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:43.605 [2024-11-18 11:59:41.285799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:43.605 [2024-11-18 11:59:41.285806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.285831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:43.605 [2024-11-18 11:59:41.290405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.290449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.605 [2024-11-18 11:59:41.290460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:16:43.605 [2024-11-18 11:59:41.290471] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.290511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.605 [2024-11-18 11:59:41.290521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:43.605 [2024-11-18 11:59:41.290530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:43.605 [2024-11-18 11:59:41.290539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.605 [2024-11-18 11:59:41.290573] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:43.605 [2024-11-18 11:59:41.290745] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:43.605 [2024-11-18 11:59:41.290758] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:43.605 [2024-11-18 11:59:41.290772] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:43.605 [2024-11-18 11:59:41.290782] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:43.606 [2024-11-18 11:59:41.290794] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:43.606 [2024-11-18 11:59:41.290803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:43.606 [2024-11-18 11:59:41.290813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:43.606 [2024-11-18 11:59:41.290820] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:43.606 [2024-11-18 11:59:41.290830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:43.606 [2024-11-18 11:59:41.290838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.606 [2024-11-18 11:59:41.290850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:43.606 [2024-11-18 11:59:41.290858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:16:43.606 [2024-11-18 11:59:41.290868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.606 [2024-11-18 11:59:41.290949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.606 [2024-11-18 11:59:41.290962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:43.606 [2024-11-18 11:59:41.290970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:43.606 [2024-11-18 11:59:41.290982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.606 [2024-11-18 11:59:41.291072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:43.606 [2024-11-18 11:59:41.291084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:43.606 [2024-11-18 11:59:41.291094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:43.606 [2024-11-18 11:59:41.291122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:43.606 
[2024-11-18 11:59:41.291136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:43.606 [2024-11-18 11:59:41.291144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.606 [2024-11-18 11:59:41.291159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:43.606 [2024-11-18 11:59:41.291169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:43.606 [2024-11-18 11:59:41.291175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.606 [2024-11-18 11:59:41.291193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:43.606 [2024-11-18 11:59:41.291199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:43.606 [2024-11-18 11:59:41.291210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:43.606 [2024-11-18 11:59:41.291225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:43.606 [2024-11-18 11:59:41.291252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:43.606 [2024-11-18 11:59:41.291276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:43.606 [2024-11-18 11:59:41.291297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:43.606 [2024-11-18 11:59:41.291322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:43.606 [2024-11-18 11:59:41.291346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.606 [2024-11-18 11:59:41.291362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:43.606 [2024-11-18 11:59:41.291386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:43.606 [2024-11-18 11:59:41.291392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.606 [2024-11-18 11:59:41.291401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:43.606 [2024-11-18 11:59:41.291408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:43.606 [2024-11-18 11:59:41.291416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:43.606 [2024-11-18 11:59:41.291431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:43.606 [2024-11-18 11:59:41.291438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291447] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:43.606 [2024-11-18 11:59:41.291454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:43.606 [2024-11-18 11:59:41.291464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.606 [2024-11-18 11:59:41.291487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:43.606 [2024-11-18 11:59:41.291494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:43.606 [2024-11-18 11:59:41.291502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:43.606 [2024-11-18 11:59:41.291509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:43.606 [2024-11-18 11:59:41.291520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:43.606 [2024-11-18 11:59:41.291526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:43.606 [2024-11-18 11:59:41.291540] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:43.606 [2024-11-18 11:59:41.291550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:43.606 [2024-11-18 11:59:41.291569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:43.606 [2024-11-18 11:59:41.291578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:43.606 [2024-11-18 11:59:41.291600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:43.606 [2024-11-18 11:59:41.291610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:43.606 [2024-11-18 11:59:41.291617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:43.606 [2024-11-18 11:59:41.291626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:43.606 [2024-11-18 11:59:41.291634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:43.606 [2024-11-18 11:59:41.291646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:43.606 [2024-11-18 11:59:41.291653] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:43.606 [2024-11-18 11:59:41.291700] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:43.606 [2024-11-18 11:59:41.291708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:43.606 [2024-11-18 11:59:41.291729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:43.606 [2024-11-18 11:59:41.291738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:43.606 [2024-11-18 11:59:41.291761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:43.606 [2024-11-18 11:59:41.291772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.606 [2024-11-18 11:59:41.291782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:43.606 [2024-11-18 11:59:41.291791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:16:43.606 [2024-11-18 11:59:41.291800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.606 [2024-11-18 11:59:41.291838] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
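The long [FTL][ftl0] trace above is emitted by a single bdev_ftl_create call; the RPCs before it assembled the two devices FTL needs: a thin-provisioned lvol on the base NVMe namespace for data, and a split partition of the second controller's namespace as the NV write-buffer cache. Pulled together from the trace (the UUIDs are the ones this particular run generated), the sequence is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Base device: attach 0000:00:11.0 and carve a thin-provisioned lvol out of it.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$rpc bdev_lvol_create_lvstore nvme0n1 lvs
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u f9062c08-201a-4ac9-b54f-e26b71000d9a
# NV cache: attach 0000:00:10.0 and split off a 5171 MiB partition for the cache.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1
# Create the FTL bdev; -t 240 raises the RPC timeout because the first-time
# NV cache scrub (logged above) can take a while.
$rpc -t 240 bdev_ftl_create -b ftl0 -d 2a017b2f-849d-4e5b-9ea2-f7b6a8762df8 -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit 20 cap is what the L2P cache init further down reports against when it logs a maximum resident size of 19 (of 20) MiB.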
00:16:43.606 [2024-11-18 11:59:41.291848] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:47.812 [2024-11-18 11:59:45.121135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.121404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:47.812 [2024-11-18 11:59:45.121451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3829.271 ms 00:16:47.812 [2024-11-18 11:59:45.121462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.154214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.154275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.812 [2024-11-18 11:59:45.154293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.444 ms 00:16:47.812 [2024-11-18 11:59:45.154302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.154452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.154464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:47.812 [2024-11-18 11:59:45.154479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:47.812 [2024-11-18 11:59:45.154487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.208253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.208311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.812 [2024-11-18 11:59:45.208330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.728 ms 00:16:47.812 [2024-11-18 11:59:45.208340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.208384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.208397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.812 [2024-11-18 11:59:45.208408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:47.812 [2024-11-18 11:59:45.208416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.209045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.209070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.812 [2024-11-18 11:59:45.209083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:16:47.812 [2024-11-18 11:59:45.209091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.209226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.209255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.812 [2024-11-18 11:59:45.209274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:16:47.812 [2024-11-18 11:59:45.209286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.225977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.226202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.812 [2024-11-18 
11:59:45.226236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.662 ms 00:16:47.812 [2024-11-18 11:59:45.226250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.239431] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:47.812 [2024-11-18 11:59:45.246494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.246549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:47.812 [2024-11-18 11:59:45.246562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.138 ms 00:16:47.812 [2024-11-18 11:59:45.246572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.338297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.338378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:47.812 [2024-11-18 11:59:45.338395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.674 ms 00:16:47.812 [2024-11-18 11:59:45.338407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.338638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.338657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.812 [2024-11-18 11:59:45.338667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:16:47.812 [2024-11-18 11:59:45.338679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.364850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.365101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:47.812 [2024-11-18 11:59:45.365130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.114 ms 00:16:47.812 [2024-11-18 11:59:45.365146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.390125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.390181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:47.812 [2024-11-18 11:59:45.390195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.929 ms 00:16:47.812 [2024-11-18 11:59:45.390205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.390868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.390897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.812 [2024-11-18 11:59:45.390912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:16:47.812 [2024-11-18 11:59:45.390927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 11:59:45.480529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.480803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:47.812 [2024-11-18 11:59:45.480831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.548 ms 00:16:47.812 [2024-11-18 11:59:45.480844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.812 [2024-11-18 
11:59:45.508353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.812 [2024-11-18 11:59:45.508547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:47.812 [2024-11-18 11:59:45.508569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.418 ms 00:16:47.812 [2024-11-18 11:59:45.508602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.074 [2024-11-18 11:59:45.534294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.074 [2024-11-18 11:59:45.534353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:48.074 [2024-11-18 11:59:45.534365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.588 ms 00:16:48.074 [2024-11-18 11:59:45.534375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.074 [2024-11-18 11:59:45.560858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.074 [2024-11-18 11:59:45.560915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:48.074 [2024-11-18 11:59:45.560929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.436 ms 00:16:48.074 [2024-11-18 11:59:45.560938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.074 [2024-11-18 11:59:45.560990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.074 [2024-11-18 11:59:45.561006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:48.074 [2024-11-18 11:59:45.561016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:48.074 [2024-11-18 11:59:45.561027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.074 [2024-11-18 11:59:45.561119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.074 [2024-11-18 11:59:45.561132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:48.074 [2024-11-18 11:59:45.561141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:48.074 [2024-11-18 11:59:45.561151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.074 [2024-11-18 11:59:45.562317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4288.087 ms, result 0 00:16:48.074 { 00:16:48.074 "name": "ftl0", 00:16:48.074 "uuid": "f79d5c70-bc24-478a-b931-c20b7d618819" 00:16:48.074 } 00:16:48.074 11:59:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:48.074 11:59:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:48.074 11:59:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:48.335 11:59:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:48.335 [2024-11-18 11:59:45.910500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:48.335 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:48.335 Zero copy mechanism will not be used. 00:16:48.335 Running I/O for 4 seconds... 
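(Arithmetic aside on the notice above: the 69632-byte I/O size is 68 KiB, i.e. 65536 + 4096, one 4 KiB block over bdevperf's 65536-byte zero-copy threshold, which is why zero copy is disabled for this run. Quick check in the shell:

  $ echo $(( 69632 / 1024 ))   # prints 68 (KiB), vs. the 64 KiB threshold
)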
00:16:50.647 2252.00 IOPS, 149.55 MiB/s
[2024-11-18T11:59:49.292Z] 2089.00 IOPS, 138.72 MiB/s
[2024-11-18T11:59:50.233Z] 1634.33 IOPS, 108.53 MiB/s
[2024-11-18T11:59:50.233Z] 1439.75 IOPS, 95.61 MiB/s
00:16:52.532 Latency(us)
00:16:52.532 [2024-11-18T11:59:50.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:52.532 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:52.532 ftl0 : 4.00 1439.36 95.58 0.00 0.00 729.40 156.75 2835.69
00:16:52.532 [2024-11-18T11:59:50.233Z] ===================================================================================================================
00:16:52.532 [2024-11-18T11:59:50.233Z] Total : 1439.36 95.58 0.00 0.00 729.40 156.75 2835.69
00:16:52.532 {
00:16:52.532   "results": [
00:16:52.532     {
00:16:52.532       "job": "ftl0",
00:16:52.532       "core_mask": "0x1",
00:16:52.532       "workload": "randwrite",
00:16:52.532       "status": "finished",
00:16:52.532       "queue_depth": 1,
00:16:52.532       "io_size": 69632,
00:16:52.532       "runtime": 4.001785,
00:16:52.532       "iops": 1439.3576866323403,
00:16:52.532       "mibps": 95.58234637792886,
00:16:52.532       "io_failed": 0,
00:16:52.532       "io_timeout": 0,
00:16:52.532       "avg_latency_us": 729.3954871794871,
00:16:52.532       "min_latency_us": 156.75076923076924,
00:16:52.532       "max_latency_us": 2835.6923076923076
00:16:52.532     }
00:16:52.532   ],
00:16:52.532   "core_count": 1
00:16:52.532 }
00:16:52.532 [2024-11-18 11:59:49.922023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:52.532 11:59:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:16:52.532 [2024-11-18 11:59:50.034604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
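(The per-run JSON summary printed above is easy to slice with jq, which the harness already uses elsewhere, e.g. jq -r .name. A minimal sketch, assuming the JSON had been captured to a hypothetical result.json file:

  $ jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' result.json
  1439.3576866323403 IOPS, 95.58234637792886 MiB/s, avg 729.3954871794871 us
)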
00:16:54.419 6927.00 IOPS, 27.06 MiB/s
[2024-11-18T11:59:53.066Z] 6189.50 IOPS, 24.18 MiB/s
[2024-11-18T11:59:54.454Z] 5862.00 IOPS, 22.90 MiB/s
[2024-11-18T11:59:54.454Z] 5605.25 IOPS, 21.90 MiB/s
00:16:56.753 Latency(us)
00:16:56.753 [2024-11-18T11:59:54.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:56.753 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:56.753 ftl0 : 4.03 5598.23 21.87 0.00 0.00 22785.24 277.27 148413.83
00:16:56.753 [2024-11-18T11:59:54.454Z] ===================================================================================================================
00:16:56.753 [2024-11-18T11:59:54.454Z] Total : 5598.23 21.87 0.00 0.00 22785.24 0.00 148413.83
00:16:56.753 [2024-11-18 11:59:54.071622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:56.753 {
00:16:56.753   "results": [
00:16:56.753     {
00:16:56.753       "job": "ftl0",
00:16:56.753       "core_mask": "0x1",
00:16:56.753       "workload": "randwrite",
00:16:56.753       "status": "finished",
00:16:56.753       "queue_depth": 128,
00:16:56.753       "io_size": 4096,
00:16:56.753       "runtime": 4.027879,
00:16:56.753       "iops": 5598.231724438594,
00:16:56.753       "mibps": 21.868092673588258,
00:16:56.753       "io_failed": 0,
00:16:56.753       "io_timeout": 0,
00:16:56.753       "avg_latency_us": 22785.23870545172,
00:16:56.753       "min_latency_us": 277.2676923076923,
00:16:56.753       "max_latency_us": 148413.83384615384
00:16:56.753     }
00:16:56.753   ],
00:16:56.753   "core_count": 1
00:16:56.753 }
00:16:56.753 11:59:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:16:56.753 [2024-11-18 11:59:54.188721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
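(Sanity check on the results above, not part of the captured output: the MiB/s column is simply iops x io_size. An illustrative awk computation reproduces the reported mibps values of the two randwrite runs:

  $ awk 'BEGIN { printf "%.2f\n", 5598.231724438594 * 4096 / 1048576 }'    # qd=128 run: 21.87
  $ awk 'BEGIN { printf "%.2f\n", 1439.3576866323403 * 69632 / 1048576 }'  # qd=1 run: 95.58
)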
00:16:58.643 4872.00 IOPS, 19.03 MiB/s
[2024-11-18T11:59:57.287Z] 4659.00 IOPS, 18.20 MiB/s
[2024-11-18T11:59:58.233Z] 4620.67 IOPS, 18.05 MiB/s
[2024-11-18T11:59:58.233Z] 4569.50 IOPS, 17.85 MiB/s
00:17:00.532 Latency(us)
00:17:00.532 [2024-11-18T11:59:58.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.532 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:00.532 Verification LBA range: start 0x0 length 0x1400000
00:17:00.532 ftl0 : 4.01 4586.87 17.92 0.00 0.00 27835.43 330.83 41943.04
00:17:00.532 [2024-11-18T11:59:58.233Z] ===================================================================================================================
00:17:00.532 [2024-11-18T11:59:58.233Z] Total : 4586.87 17.92 0.00 0.00 27835.43 0.00 41943.04
00:17:00.532 [2024-11-18 11:59:58.218000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:00.532 {
00:17:00.532   "results": [
00:17:00.532     {
00:17:00.532       "job": "ftl0",
00:17:00.532       "core_mask": "0x1",
00:17:00.532       "workload": "verify",
00:17:00.532       "status": "finished",
00:17:00.532       "verify_range": {
00:17:00.532         "start": 0,
00:17:00.532         "length": 20971520
00:17:00.532       },
00:17:00.532       "queue_depth": 128,
00:17:00.532       "io_size": 4096,
00:17:00.532       "runtime": 4.012755,
00:17:00.532       "iops": 4586.87360678636,
00:17:00.532       "mibps": 17.91747502650922,
00:17:00.532       "io_failed": 0,
00:17:00.532       "io_timeout": 0,
00:17:00.532       "avg_latency_us": 27835.42805941206,
00:17:00.532       "min_latency_us": 330.83076923076925,
00:17:00.532       "max_latency_us": 41943.04
00:17:00.532     }
00:17:00.532   ],
00:17:00.532   "core_count": 1
00:17:00.532 }
00:17:00.794 11:59:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:17:00.794 [2024-11-18 11:59:58.437273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:00.794 [2024-11-18 11:59:58.437339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:00.794 [2024-11-18 11:59:58.437356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:00.794 [2024-11-18 11:59:58.437367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:00.794 [2024-11-18 11:59:58.437390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:00.794 [2024-11-18 11:59:58.440497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:00.794 [2024-11-18 11:59:58.440697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:00.794 [2024-11-18 11:59:58.440725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms
00:17:00.794 [2024-11-18 11:59:58.440733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:00.794 [2024-11-18 11:59:58.443779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:00.794 [2024-11-18 11:59:58.443931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:00.794 [2024-11-18 11:59:58.443957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.003 ms
00:17:00.794 [2024-11-18 11:59:58.443966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:01.056 [2024-11-18 11:59:58.663062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:01.056 [2024-11-18 11:59:58.663271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:01.056 [2024-11-18 11:59:58.663305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 219.062 ms 00:17:01.056 [2024-11-18 11:59:58.663314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.056 [2024-11-18 11:59:58.669504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.056 [2024-11-18 11:59:58.669546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:01.056 [2024-11-18 11:59:58.669563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:17:01.056 [2024-11-18 11:59:58.669571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.056 [2024-11-18 11:59:58.695830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.056 [2024-11-18 11:59:58.695877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:01.056 [2024-11-18 11:59:58.695894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:17:01.056 [2024-11-18 11:59:58.695902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.056 [2024-11-18 11:59:58.713019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.056 [2024-11-18 11:59:58.713209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:01.056 [2024-11-18 11:59:58.713240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.064 ms 00:17:01.056 [2024-11-18 11:59:58.713249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.056 [2024-11-18 11:59:58.713406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.056 [2024-11-18 11:59:58.713418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:01.056 [2024-11-18 11:59:58.713434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:01.056 [2024-11-18 11:59:58.713442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.056 [2024-11-18 11:59:58.738909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.056 [2024-11-18 11:59:58.738957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:01.056 [2024-11-18 11:59:58.738971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.445 ms 00:17:01.056 [2024-11-18 11:59:58.738978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.318 [2024-11-18 11:59:58.763734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.318 [2024-11-18 11:59:58.763901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:01.318 [2024-11-18 11:59:58.763925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.704 ms 00:17:01.318 [2024-11-18 11:59:58.763932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.318 [2024-11-18 11:59:58.788366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.318 [2024-11-18 11:59:58.788409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:01.318 [2024-11-18 11:59:58.788424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.389 ms 00:17:01.318 [2024-11-18 11:59:58.788432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.318 [2024-11-18 11:59:58.812799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.318 [2024-11-18 11:59:58.812845] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:01.318 [2024-11-18 11:59:58.812862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.275 ms 00:17:01.318 [2024-11-18 11:59:58.812870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.318 [2024-11-18 11:59:58.812915] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:01.318 [2024-11-18 11:59:58.812932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:01.318 [2024-11-18 11:59:58.812945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:01.318 [2024-11-18 11:59:58.812953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.812963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.812970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.812980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.812988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.812999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:01.319 [2024-11-18 11:59:58.813130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:01.319 [2024-11-18 11:59:58.813771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813825] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:01.320 [2024-11-18 11:59:58.813868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:01.320 [2024-11-18 11:59:58.813879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f79d5c70-bc24-478a-b931-c20b7d618819 00:17:01.320 [2024-11-18 11:59:58.813887] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:01.320 [2024-11-18 11:59:58.813897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:01.320 [2024-11-18 11:59:58.813907] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:01.320 [2024-11-18 11:59:58.813930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:01.320 [2024-11-18 11:59:58.813938] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:01.320 [2024-11-18 11:59:58.813948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:01.320 [2024-11-18 11:59:58.813955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:01.320 [2024-11-18 11:59:58.813966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:01.320 [2024-11-18 11:59:58.813972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:01.320 [2024-11-18 11:59:58.813982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.320 [2024-11-18 11:59:58.813990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:01.320 [2024-11-18 11:59:58.814001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:17:01.320 [2024-11-18 11:59:58.814009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.827624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.320 [2024-11-18 11:59:58.827664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:01.320 [2024-11-18 11:59:58.827678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.574 ms 00:17:01.320 [2024-11-18 11:59:58.827687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.828091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.320 [2024-11-18 11:59:58.828106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:01.320 [2024-11-18 11:59:58.828117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:01.320 [2024-11-18 11:59:58.828124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.866857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.320 [2024-11-18 11:59:58.867032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.320 [2024-11-18 11:59:58.867061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.320 [2024-11-18 11:59:58.867069] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.867142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.320 [2024-11-18 11:59:58.867150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.320 [2024-11-18 11:59:58.867160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.320 [2024-11-18 11:59:58.867168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.867266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.320 [2024-11-18 11:59:58.867282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.320 [2024-11-18 11:59:58.867293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.320 [2024-11-18 11:59:58.867301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.867319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.320 [2024-11-18 11:59:58.867327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.320 [2024-11-18 11:59:58.867338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.320 [2024-11-18 11:59:58.867346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.320 [2024-11-18 11:59:58.950631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.320 [2024-11-18 11:59:58.950685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.320 [2024-11-18 11:59:58.950704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.320 [2024-11-18 11:59:58.950713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.019418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.019476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.582 [2024-11-18 11:59:59.019492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.582 [2024-11-18 11:59:59.019500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.019610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.019622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.582 [2024-11-18 11:59:59.019636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.582 [2024-11-18 11:59:59.019645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.019714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.019725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.582 [2024-11-18 11:59:59.019737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.582 [2024-11-18 11:59:59.019745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.019855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.019865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.582 [2024-11-18 11:59:59.019881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:01.582 [2024-11-18 11:59:59.019889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.019923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.019932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:01.582 [2024-11-18 11:59:59.019943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.582 [2024-11-18 11:59:59.019950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.019993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.020002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.582 [2024-11-18 11:59:59.020013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.582 [2024-11-18 11:59:59.020023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.020071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.582 [2024-11-18 11:59:59.020088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.582 [2024-11-18 11:59:59.020099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.582 [2024-11-18 11:59:59.020108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.582 [2024-11-18 11:59:59.020253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 582.928 ms, result 0 00:17:01.582 true 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73124 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73124 ']' 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73124 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73124 00:17:01.582 killing process with pid 73124 00:17:01.582 Received shutdown signal, test time was about 4.000000 seconds 00:17:01.582 00:17:01.582 Latency(us) 00:17:01.582 [2024-11-18T11:59:59.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.582 [2024-11-18T11:59:59.283Z] =================================================================================================================== 00:17:01.582 [2024-11-18T11:59:59.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73124' 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73124 00:17:01.582 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73124 00:17:02.153 11:59:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:02.153 11:59:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:02.153 11:59:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:02.153 Remove shared memory files 00:17:02.153 11:59:59 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:02.415 11:59:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:02.415 11:59:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:02.415 11:59:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:02.415 11:59:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:02.415 ************************************ 00:17:02.415 END TEST ftl_bdevperf 00:17:02.415 ************************************ 00:17:02.415 00:17:02.415 real 0m22.663s 00:17:02.415 user 0m25.313s 00:17:02.415 sys 0m0.935s 00:17:02.415 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:02.415 11:59:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:02.415 11:59:59 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:02.415 11:59:59 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:02.415 11:59:59 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:02.415 11:59:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:02.415 ************************************ 00:17:02.415 START TEST ftl_trim 00:17:02.415 ************************************ 00:17:02.415 11:59:59 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:02.415 * Looking for test storage... 00:17:02.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.415 12:00:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:02.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.415 --rc genhtml_branch_coverage=1 00:17:02.415 --rc genhtml_function_coverage=1 00:17:02.415 --rc genhtml_legend=1 00:17:02.415 --rc geninfo_all_blocks=1 00:17:02.415 --rc geninfo_unexecuted_blocks=1 00:17:02.415 00:17:02.415 ' 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:02.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.415 --rc genhtml_branch_coverage=1 00:17:02.415 --rc genhtml_function_coverage=1 00:17:02.415 --rc genhtml_legend=1 00:17:02.415 --rc geninfo_all_blocks=1 00:17:02.415 --rc geninfo_unexecuted_blocks=1 00:17:02.415 00:17:02.415 ' 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:02.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.415 --rc genhtml_branch_coverage=1 00:17:02.415 --rc genhtml_function_coverage=1 00:17:02.415 --rc genhtml_legend=1 00:17:02.415 --rc geninfo_all_blocks=1 00:17:02.415 --rc geninfo_unexecuted_blocks=1 00:17:02.415 00:17:02.415 ' 00:17:02.415 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:02.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.415 --rc genhtml_branch_coverage=1 00:17:02.415 --rc genhtml_function_coverage=1 00:17:02.415 --rc genhtml_legend=1 00:17:02.415 --rc geninfo_all_blocks=1 00:17:02.415 --rc geninfo_unexecuted_blocks=1 00:17:02.415 00:17:02.415 ' 00:17:02.415 12:00:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:02.415 12:00:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:02.415 12:00:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:02.415 12:00:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:02.415 12:00:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:02.677 12:00:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73480 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:02.677 12:00:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73480 00:17:02.677 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73480 ']' 00:17:02.677 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.677 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.677 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.677 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.677 12:00:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:02.677 [2024-11-18 12:00:00.211616] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:17:02.677 [2024-11-18 12:00:00.211930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73480 ] 00:17:02.939 [2024-11-18 12:00:00.378683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.939 [2024-11-18 12:00:00.505339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.939 [2024-11-18 12:00:00.505697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.939 [2024-11-18 12:00:00.505845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.512 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.512 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:03.512 12:00:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:03.512 12:00:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:03.512 12:00:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:03.512 12:00:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:03.512 12:00:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:03.512 12:00:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:04.083 12:00:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:04.083 12:00:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:04.084 12:00:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:04.084 { 00:17:04.084 "name": "nvme0n1", 00:17:04.084 "aliases": [ 
00:17:04.084 "4312ffb6-268d-41eb-8473-804a54e340f4" 00:17:04.084 ], 00:17:04.084 "product_name": "NVMe disk", 00:17:04.084 "block_size": 4096, 00:17:04.084 "num_blocks": 1310720, 00:17:04.084 "uuid": "4312ffb6-268d-41eb-8473-804a54e340f4", 00:17:04.084 "numa_id": -1, 00:17:04.084 "assigned_rate_limits": { 00:17:04.084 "rw_ios_per_sec": 0, 00:17:04.084 "rw_mbytes_per_sec": 0, 00:17:04.084 "r_mbytes_per_sec": 0, 00:17:04.084 "w_mbytes_per_sec": 0 00:17:04.084 }, 00:17:04.084 "claimed": true, 00:17:04.084 "claim_type": "read_many_write_one", 00:17:04.084 "zoned": false, 00:17:04.084 "supported_io_types": { 00:17:04.084 "read": true, 00:17:04.084 "write": true, 00:17:04.084 "unmap": true, 00:17:04.084 "flush": true, 00:17:04.084 "reset": true, 00:17:04.084 "nvme_admin": true, 00:17:04.084 "nvme_io": true, 00:17:04.084 "nvme_io_md": false, 00:17:04.084 "write_zeroes": true, 00:17:04.084 "zcopy": false, 00:17:04.084 "get_zone_info": false, 00:17:04.084 "zone_management": false, 00:17:04.084 "zone_append": false, 00:17:04.084 "compare": true, 00:17:04.084 "compare_and_write": false, 00:17:04.084 "abort": true, 00:17:04.084 "seek_hole": false, 00:17:04.084 "seek_data": false, 00:17:04.084 "copy": true, 00:17:04.084 "nvme_iov_md": false 00:17:04.084 }, 00:17:04.084 "driver_specific": { 00:17:04.084 "nvme": [ 00:17:04.084 { 00:17:04.084 "pci_address": "0000:00:11.0", 00:17:04.084 "trid": { 00:17:04.084 "trtype": "PCIe", 00:17:04.084 "traddr": "0000:00:11.0" 00:17:04.084 }, 00:17:04.084 "ctrlr_data": { 00:17:04.084 "cntlid": 0, 00:17:04.084 "vendor_id": "0x1b36", 00:17:04.084 "model_number": "QEMU NVMe Ctrl", 00:17:04.084 "serial_number": "12341", 00:17:04.084 "firmware_revision": "8.0.0", 00:17:04.084 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:04.084 "oacs": { 00:17:04.084 "security": 0, 00:17:04.084 "format": 1, 00:17:04.084 "firmware": 0, 00:17:04.084 "ns_manage": 1 00:17:04.084 }, 00:17:04.084 "multi_ctrlr": false, 00:17:04.084 "ana_reporting": false 00:17:04.084 }, 00:17:04.084 "vs": { 00:17:04.084 "nvme_version": "1.4" 00:17:04.084 }, 00:17:04.084 "ns_data": { 00:17:04.084 "id": 1, 00:17:04.084 "can_share": false 00:17:04.084 } 00:17:04.084 } 00:17:04.084 ], 00:17:04.084 "mp_policy": "active_passive" 00:17:04.084 } 00:17:04.084 } 00:17:04.084 ]' 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:04.084 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:04.345 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:04.345 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:04.345 12:00:01 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:17:04.345 12:00:01 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:04.345 12:00:01 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:04.345 12:00:01 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:04.345 12:00:01 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:04.345 12:00:01 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:04.345 12:00:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f9062c08-201a-4ac9-b54f-e26b71000d9a 00:17:04.345 12:00:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:04.345 12:00:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f9062c08-201a-4ac9-b54f-e26b71000d9a 00:17:04.607 12:00:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:04.867 12:00:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=fcd531bb-7b0c-4f88-ac97-066ab9e6ba94 00:17:04.867 12:00:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fcd531bb-7b0c-4f88-ac97-066ab9e6ba94 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:05.128 12:00:02 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.128 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.128 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:05.128 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:05.128 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:05.128 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.390 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:05.390 { 00:17:05.390 "name": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:05.390 "aliases": [ 00:17:05.390 "lvs/nvme0n1p0" 00:17:05.390 ], 00:17:05.390 "product_name": "Logical Volume", 00:17:05.390 "block_size": 4096, 00:17:05.390 "num_blocks": 26476544, 00:17:05.390 "uuid": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:05.390 "assigned_rate_limits": { 00:17:05.390 "rw_ios_per_sec": 0, 00:17:05.390 "rw_mbytes_per_sec": 0, 00:17:05.390 "r_mbytes_per_sec": 0, 00:17:05.390 "w_mbytes_per_sec": 0 00:17:05.390 }, 00:17:05.390 "claimed": false, 00:17:05.390 "zoned": false, 00:17:05.390 "supported_io_types": { 00:17:05.390 "read": true, 00:17:05.390 "write": true, 00:17:05.390 "unmap": true, 00:17:05.390 "flush": false, 00:17:05.390 "reset": true, 00:17:05.390 "nvme_admin": false, 00:17:05.390 "nvme_io": false, 00:17:05.390 "nvme_io_md": false, 00:17:05.390 "write_zeroes": true, 00:17:05.390 "zcopy": false, 00:17:05.390 "get_zone_info": false, 00:17:05.391 "zone_management": false, 00:17:05.391 "zone_append": false, 00:17:05.391 "compare": false, 00:17:05.391 "compare_and_write": false, 00:17:05.391 "abort": false, 00:17:05.391 "seek_hole": true, 00:17:05.391 "seek_data": true, 00:17:05.391 "copy": false, 00:17:05.391 "nvme_iov_md": false 00:17:05.391 }, 00:17:05.391 "driver_specific": { 00:17:05.391 "lvol": { 00:17:05.391 "lvol_store_uuid": "fcd531bb-7b0c-4f88-ac97-066ab9e6ba94", 00:17:05.391 "base_bdev": "nvme0n1", 00:17:05.391 "thin_provision": true, 00:17:05.391 "num_allocated_clusters": 0, 00:17:05.391 "snapshot": false, 00:17:05.391 "clone": false, 00:17:05.391 "esnap_clone": false 00:17:05.391 } 00:17:05.391 } 00:17:05.391 } 00:17:05.391 ]' 00:17:05.391 12:00:02 ftl.ftl_trim -- 
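
The cleanup-and-provision sequence above boils down to four RPCs: list and delete any lvstore left by a previous run, create a fresh store on the base namespace, and thin-provision the 103424 MiB volume the FTL device will sit on. A standalone equivalent using this run's UUIDs (the xargs form is a stand-in for the clear_lvols loop in ftl/common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # clear_lvols: remove stale stores from earlier runs
    $rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid' \
        | xargs -r -n1 $rpc bdev_lvol_delete_lvstore -u
    # fresh store on nvme0n1, then a thin-provisioned (-t) 103424 MiB volume in it
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs            # -> fcd531bb-7b0c-4f88-ac97-066ab9e6ba94
    $rpc bdev_lvol_create nvme0n1p0 103424 -t \
        -u fcd531bb-7b0c-4f88-ac97-066ab9e6ba94          # -> 20fa4417-78d8-4cfa-a369-536da6ab2c7f
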
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:05.391 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:05.391 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:05.391 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:05.391 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:05.391 12:00:02 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:05.391 12:00:02 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:05.391 12:00:02 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:05.391 12:00:02 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:05.652 12:00:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:05.652 12:00:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:05.652 12:00:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.652 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.652 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:05.652 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:05.652 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:05.652 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:05.913 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:05.913 { 00:17:05.913 "name": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:05.913 "aliases": [ 00:17:05.913 "lvs/nvme0n1p0" 00:17:05.913 ], 00:17:05.913 "product_name": "Logical Volume", 00:17:05.913 "block_size": 4096, 00:17:05.913 "num_blocks": 26476544, 00:17:05.913 "uuid": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:05.913 "assigned_rate_limits": { 00:17:05.913 "rw_ios_per_sec": 0, 00:17:05.913 "rw_mbytes_per_sec": 0, 00:17:05.913 "r_mbytes_per_sec": 0, 00:17:05.913 "w_mbytes_per_sec": 0 00:17:05.913 }, 00:17:05.913 "claimed": false, 00:17:05.913 "zoned": false, 00:17:05.913 "supported_io_types": { 00:17:05.913 "read": true, 00:17:05.913 "write": true, 00:17:05.913 "unmap": true, 00:17:05.913 "flush": false, 00:17:05.913 "reset": true, 00:17:05.913 "nvme_admin": false, 00:17:05.913 "nvme_io": false, 00:17:05.913 "nvme_io_md": false, 00:17:05.913 "write_zeroes": true, 00:17:05.913 "zcopy": false, 00:17:05.913 "get_zone_info": false, 00:17:05.913 "zone_management": false, 00:17:05.913 "zone_append": false, 00:17:05.913 "compare": false, 00:17:05.913 "compare_and_write": false, 00:17:05.913 "abort": false, 00:17:05.913 "seek_hole": true, 00:17:05.913 "seek_data": true, 00:17:05.913 "copy": false, 00:17:05.913 "nvme_iov_md": false 00:17:05.913 }, 00:17:05.913 "driver_specific": { 00:17:05.913 "lvol": { 00:17:05.913 "lvol_store_uuid": "fcd531bb-7b0c-4f88-ac97-066ab9e6ba94", 00:17:05.913 "base_bdev": "nvme0n1", 00:17:05.913 "thin_provision": true, 00:17:05.913 "num_allocated_clusters": 0, 00:17:05.913 "snapshot": false, 00:17:05.913 "clone": false, 00:17:05.913 "esnap_clone": false 00:17:05.913 } 00:17:05.913 } 00:17:05.913 } 00:17:05.913 ]' 00:17:05.913 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:05.913 12:00:03 ftl.ftl_trim -- 
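
get_bdev_size, called repeatedly in this stage, is just the jq pair above plus integer math: size in MiB = num_blocks * block_size / 2^20. For the raw namespace that gave 1310720 * 4096 / 2^20 = 5120 MiB; for the lvol dumped here it is 26476544 * 4096 / 2^20 = 103424 MiB, which round-trips the size requested at creation. The same check as a hypothetical standalone snippet against this run's bdev:

    # hypothetical standalone get_bdev_size: MiB = num_blocks * block_size / 2^20
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc bdev_get_bdevs -b 20fa4417-78d8-4cfa-a369-536da6ab2c7f)
    bs=$(jq '.[] .block_size' <<< "$info")      # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")      # 26476544
    echo $(( nb * bs / 1024 / 1024 ))           # 103424
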
common/autotest_common.sh@1385 -- # bs=4096 00:17:05.913 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:05.913 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:05.913 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:05.913 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:05.913 12:00:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:05.913 12:00:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:06.172 12:00:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:06.172 12:00:03 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:06.172 12:00:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:06.172 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:06.172 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:06.172 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:06.172 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:06.172 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20fa4417-78d8-4cfa-a369-536da6ab2c7f 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:06.431 { 00:17:06.431 "name": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:06.431 "aliases": [ 00:17:06.431 "lvs/nvme0n1p0" 00:17:06.431 ], 00:17:06.431 "product_name": "Logical Volume", 00:17:06.431 "block_size": 4096, 00:17:06.431 "num_blocks": 26476544, 00:17:06.431 "uuid": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:06.431 "assigned_rate_limits": { 00:17:06.431 "rw_ios_per_sec": 0, 00:17:06.431 "rw_mbytes_per_sec": 0, 00:17:06.431 "r_mbytes_per_sec": 0, 00:17:06.431 "w_mbytes_per_sec": 0 00:17:06.431 }, 00:17:06.431 "claimed": false, 00:17:06.431 "zoned": false, 00:17:06.431 "supported_io_types": { 00:17:06.431 "read": true, 00:17:06.431 "write": true, 00:17:06.431 "unmap": true, 00:17:06.431 "flush": false, 00:17:06.431 "reset": true, 00:17:06.431 "nvme_admin": false, 00:17:06.431 "nvme_io": false, 00:17:06.431 "nvme_io_md": false, 00:17:06.431 "write_zeroes": true, 00:17:06.431 "zcopy": false, 00:17:06.431 "get_zone_info": false, 00:17:06.431 "zone_management": false, 00:17:06.431 "zone_append": false, 00:17:06.431 "compare": false, 00:17:06.431 "compare_and_write": false, 00:17:06.431 "abort": false, 00:17:06.431 "seek_hole": true, 00:17:06.431 "seek_data": true, 00:17:06.431 "copy": false, 00:17:06.431 "nvme_iov_md": false 00:17:06.431 }, 00:17:06.431 "driver_specific": { 00:17:06.431 "lvol": { 00:17:06.431 "lvol_store_uuid": "fcd531bb-7b0c-4f88-ac97-066ab9e6ba94", 00:17:06.431 "base_bdev": "nvme0n1", 00:17:06.431 "thin_provision": true, 00:17:06.431 "num_allocated_clusters": 0, 00:17:06.431 "snapshot": false, 00:17:06.431 "clone": false, 00:17:06.431 "esnap_clone": false 00:17:06.431 } 00:17:06.431 } 00:17:06.431 } 00:17:06.431 ]' 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
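
The cache side needs only two more RPCs, both already traced above: attach the controller at 0000:00:10.0, then carve a single split off nvc0n1 to serve as the FTL write buffer, sized at the 5171 MB cache_size/base_size figure computed just above in ftl/common.sh:

    # cache-device wiring, commands verbatim from this run
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MB split: nvc0n1p0
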
nb=26476544 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:06.431 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:06.431 12:00:03 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:06.431 12:00:03 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 20fa4417-78d8-4cfa-a369-536da6ab2c7f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:06.431 [2024-11-18 12:00:04.118450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.118495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:06.431 [2024-11-18 12:00:04.118509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:06.431 [2024-11-18 12:00:04.118516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.431 [2024-11-18 12:00:04.120799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.120825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.431 [2024-11-18 12:00:04.120833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:17:06.431 [2024-11-18 12:00:04.120840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.431 [2024-11-18 12:00:04.120924] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:06.431 [2024-11-18 12:00:04.121492] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:06.431 [2024-11-18 12:00:04.121510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.121516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.431 [2024-11-18 12:00:04.121525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:17:06.431 [2024-11-18 12:00:04.121530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.431 [2024-11-18 12:00:04.121635] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef29c1cf-2535-40a8-a97f-5e20363ab578 00:17:06.431 [2024-11-18 12:00:04.122640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.122669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:06.431 [2024-11-18 12:00:04.122676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:06.431 [2024-11-18 12:00:04.122684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.431 [2024-11-18 12:00:04.127949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.128061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.431 [2024-11-18 12:00:04.128078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.192 ms 00:17:06.431 [2024-11-18 12:00:04.128085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.431 [2024-11-18 12:00:04.128187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.128197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.431 [2024-11-18 12:00:04.128204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:17:06.431 [2024-11-18 12:00:04.128213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.431 [2024-11-18 12:00:04.128254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.431 [2024-11-18 12:00:04.128262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:06.432 [2024-11-18 12:00:04.128268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:06.432 [2024-11-18 12:00:04.128277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.432 [2024-11-18 12:00:04.128302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:06.708 [2024-11-18 12:00:04.131240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.708 [2024-11-18 12:00:04.131334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.708 [2024-11-18 12:00:04.131350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:17:06.708 [2024-11-18 12:00:04.131357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.708 [2024-11-18 12:00:04.131408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.708 [2024-11-18 12:00:04.131415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:06.708 [2024-11-18 12:00:04.131423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:06.708 [2024-11-18 12:00:04.131442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.708 [2024-11-18 12:00:04.131480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:06.708 [2024-11-18 12:00:04.131622] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:06.708 [2024-11-18 12:00:04.131638] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:06.708 [2024-11-18 12:00:04.131649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:06.708 [2024-11-18 12:00:04.131661] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:06.708 [2024-11-18 12:00:04.131669] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:06.708 [2024-11-18 12:00:04.131679] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:06.708 [2024-11-18 12:00:04.131686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:06.708 [2024-11-18 12:00:04.131697] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:06.708 [2024-11-18 12:00:04.131706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:06.708 [2024-11-18 12:00:04.131714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.708 [2024-11-18 12:00:04.131720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:06.708 [2024-11-18 12:00:04.131727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:06.708 [2024-11-18 12:00:04.131733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.708 [2024-11-18 12:00:04.131819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.708 
[2024-11-18 12:00:04.131826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:06.708 [2024-11-18 12:00:04.131833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:06.708 [2024-11-18 12:00:04.131838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.708 [2024-11-18 12:00:04.131939] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:06.708 [2024-11-18 12:00:04.131946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:06.708 [2024-11-18 12:00:04.131955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.708 [2024-11-18 12:00:04.131961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.708 [2024-11-18 12:00:04.131968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:06.708 [2024-11-18 12:00:04.131973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:06.708 [2024-11-18 12:00:04.131979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:06.708 [2024-11-18 12:00:04.131985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:06.708 [2024-11-18 12:00:04.131991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:06.708 [2024-11-18 12:00:04.131996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.708 [2024-11-18 12:00:04.132002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:06.708 [2024-11-18 12:00:04.132007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:06.708 [2024-11-18 12:00:04.132015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.708 [2024-11-18 12:00:04.132020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:06.708 [2024-11-18 12:00:04.132027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:06.708 [2024-11-18 12:00:04.132032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.708 [2024-11-18 12:00:04.132040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:06.708 [2024-11-18 12:00:04.132045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:06.708 [2024-11-18 12:00:04.132051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.708 [2024-11-18 12:00:04.132057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:06.708 [2024-11-18 12:00:04.132064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.709 [2024-11-18 12:00:04.132076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:06.709 [2024-11-18 12:00:04.132081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.709 [2024-11-18 12:00:04.132092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:06.709 [2024-11-18 12:00:04.132101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.709 [2024-11-18 12:00:04.132118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:06.709 [2024-11-18 12:00:04.132126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.709 [2024-11-18 12:00:04.132142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:06.709 [2024-11-18 12:00:04.132151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.709 [2024-11-18 12:00:04.132163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:06.709 [2024-11-18 12:00:04.132168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:06.709 [2024-11-18 12:00:04.132174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.709 [2024-11-18 12:00:04.132179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:06.709 [2024-11-18 12:00:04.132186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:06.709 [2024-11-18 12:00:04.132191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:06.709 [2024-11-18 12:00:04.132203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:06.709 [2024-11-18 12:00:04.132209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132214] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:06.709 [2024-11-18 12:00:04.132222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:06.709 [2024-11-18 12:00:04.132227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.709 [2024-11-18 12:00:04.132234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.709 [2024-11-18 12:00:04.132240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:06.709 [2024-11-18 12:00:04.132250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:06.709 [2024-11-18 12:00:04.132255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:06.709 [2024-11-18 12:00:04.132262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:06.709 [2024-11-18 12:00:04.132267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:06.709 [2024-11-18 12:00:04.132274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:06.709 [2024-11-18 12:00:04.132281] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:06.709 [2024-11-18 12:00:04.132291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:06.709 [2024-11-18 12:00:04.132307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:06.709 [2024-11-18 12:00:04.132312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:06.709 [2024-11-18 12:00:04.132319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:06.709 [2024-11-18 12:00:04.132325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:06.709 [2024-11-18 12:00:04.132331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:06.709 [2024-11-18 12:00:04.132337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:06.709 [2024-11-18 12:00:04.132343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:06.709 [2024-11-18 12:00:04.132349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:06.709 [2024-11-18 12:00:04.132358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:06.709 [2024-11-18 12:00:04.132390] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:06.709 [2024-11-18 12:00:04.132398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:06.709 [2024-11-18 12:00:04.132411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:06.709 [2024-11-18 12:00:04.132417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:06.709 [2024-11-18 12:00:04.132424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:06.709 [2024-11-18 12:00:04.132430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.709 [2024-11-18 12:00:04.132437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:06.709 [2024-11-18 12:00:04.132443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:17:06.709 [2024-11-18 12:00:04.132449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.709 [2024-11-18 12:00:04.132535] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:06.709 [2024-11-18 12:00:04.132546] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:09.240 [2024-11-18 12:00:06.469070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.240 [2024-11-18 12:00:06.469235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:09.240 [2024-11-18 12:00:06.469317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2336.525 ms 00:17:09.240 [2024-11-18 12:00:06.469346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.240 [2024-11-18 12:00:06.494927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.240 [2024-11-18 12:00:06.495074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.240 [2024-11-18 12:00:06.495135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.309 ms 00:17:09.240 [2024-11-18 12:00:06.495161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.240 [2024-11-18 12:00:06.495304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.240 [2024-11-18 12:00:06.495330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:09.240 [2024-11-18 12:00:06.495351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:09.240 [2024-11-18 12:00:06.495374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.240 [2024-11-18 12:00:06.537743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.240 [2024-11-18 12:00:06.537962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.240 [2024-11-18 12:00:06.538062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.161 ms 00:17:09.241 [2024-11-18 12:00:06.538108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.538247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.538471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.241 [2024-11-18 12:00:06.538513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.241 [2024-11-18 12:00:06.538546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.538972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.539120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.241 [2024-11-18 12:00:06.539208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:17:09.241 [2024-11-18 12:00:06.539248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.539511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.539634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.241 [2024-11-18 12:00:06.539721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:09.241 [2024-11-18 12:00:06.539807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.555612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.555714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:09.241 [2024-11-18 12:00:06.555788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.727 ms 00:17:09.241 [2024-11-18 12:00:06.555813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.567558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:09.241 [2024-11-18 12:00:06.582202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.582312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.241 [2024-11-18 12:00:06.582367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.947 ms 00:17:09.241 [2024-11-18 12:00:06.582390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.644286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.644465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:09.241 [2024-11-18 12:00:06.644552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.791 ms 00:17:09.241 [2024-11-18 12:00:06.644578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.644816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.644901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.241 [2024-11-18 12:00:06.644953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:09.241 [2024-11-18 12:00:06.644974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.668504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.668628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:09.241 [2024-11-18 12:00:06.668682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.415 ms 00:17:09.241 [2024-11-18 12:00:06.668705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.690806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.690904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:09.241 [2024-11-18 12:00:06.690966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.035 ms 00:17:09.241 [2024-11-18 12:00:06.690986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.691621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.691707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.241 [2024-11-18 12:00:06.691755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:17:09.241 [2024-11-18 12:00:06.691777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.757644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.757782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:09.241 [2024-11-18 12:00:06.757841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.811 ms 00:17:09.241 [2024-11-18 12:00:06.757864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
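
Every FTL startup step traced since the create call (superblock init, the ~2.3 s NV cache scrub, L2P, valid map, trim map, band metadata, P2L wipe) executes inside one blocking RPC, which is why trim.sh issues it with `-t 240` rather than the default timeout. The invocation, verbatim from this run:

    # FTL bdev creation as issued above; -t 240 matches trim.sh's $timeout
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 \
        -d 20fa4417-78d8-4cfa-a369-536da6ab2c7f \
        -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
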
00:17:09.241 [2024-11-18 12:00:06.781393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.781425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:09.241 [2024-11-18 12:00:06.781439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.422 ms 00:17:09.241 [2024-11-18 12:00:06.781449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.804218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.804329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:09.241 [2024-11-18 12:00:06.804346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.710 ms 00:17:09.241 [2024-11-18 12:00:06.804354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.827297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.827402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.241 [2024-11-18 12:00:06.827454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:17:09.241 [2024-11-18 12:00:06.827489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.827603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.827631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.241 [2024-11-18 12:00:06.827655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.241 [2024-11-18 12:00:06.827674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.827767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.241 [2024-11-18 12:00:06.827844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.241 [2024-11-18 12:00:06.827865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:09.241 [2024-11-18 12:00:06.827884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.241 [2024-11-18 12:00:06.828702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.241 [2024-11-18 12:00:06.831761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2709.964 ms, result 0 00:17:09.241 [2024-11-18 12:00:06.832607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.241 { 00:17:09.241 "name": "ftl0", 00:17:09.241 "uuid": "ef29c1cf-2535-40a8-a97f-5e20363ab578" 00:17:09.241 } 00:17:09.241 12:00:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:09.241 12:00:06 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:09.241 12:00:06 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:09.241 12:00:06 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:17:09.241 12:00:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:09.241 12:00:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:09.241 12:00:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.500 12:00:07 ftl.ftl_trim -- 
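
Before touching ftl0, the harness runs waitforbdev ftl0: with no explicit timeout the helper falls back to bdev_timeout=2000, drains outstanding examine callbacks, then asks for the bdev with that deadline, which is the `-t 2000` visible on the next RPC. Roughly (helper from common/autotest_common.sh, paraphrased):

    # waitforbdev ftl0, as traced here
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_wait_for_examine              # let bdev examine callbacks settle
    $rpc bdev_get_bdevs -b ftl0 -t 2000     # -t waits up to 2000 ms for ftl0 to appear
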
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:09.758 [ 00:17:09.758 { 00:17:09.758 "name": "ftl0", 00:17:09.758 "aliases": [ 00:17:09.758 "ef29c1cf-2535-40a8-a97f-5e20363ab578" 00:17:09.758 ], 00:17:09.758 "product_name": "FTL disk", 00:17:09.758 "block_size": 4096, 00:17:09.758 "num_blocks": 23592960, 00:17:09.758 "uuid": "ef29c1cf-2535-40a8-a97f-5e20363ab578", 00:17:09.758 "assigned_rate_limits": { 00:17:09.758 "rw_ios_per_sec": 0, 00:17:09.758 "rw_mbytes_per_sec": 0, 00:17:09.758 "r_mbytes_per_sec": 0, 00:17:09.758 "w_mbytes_per_sec": 0 00:17:09.758 }, 00:17:09.758 "claimed": false, 00:17:09.758 "zoned": false, 00:17:09.758 "supported_io_types": { 00:17:09.758 "read": true, 00:17:09.758 "write": true, 00:17:09.758 "unmap": true, 00:17:09.758 "flush": true, 00:17:09.758 "reset": false, 00:17:09.758 "nvme_admin": false, 00:17:09.758 "nvme_io": false, 00:17:09.758 "nvme_io_md": false, 00:17:09.758 "write_zeroes": true, 00:17:09.758 "zcopy": false, 00:17:09.758 "get_zone_info": false, 00:17:09.758 "zone_management": false, 00:17:09.758 "zone_append": false, 00:17:09.758 "compare": false, 00:17:09.758 "compare_and_write": false, 00:17:09.758 "abort": false, 00:17:09.758 "seek_hole": false, 00:17:09.758 "seek_data": false, 00:17:09.758 "copy": false, 00:17:09.758 "nvme_iov_md": false 00:17:09.758 }, 00:17:09.758 "driver_specific": { 00:17:09.758 "ftl": { 00:17:09.758 "base_bdev": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 00:17:09.758 "cache": "nvc0n1p0" 00:17:09.758 } 00:17:09.758 } 00:17:09.758 } 00:17:09.758 ] 00:17:09.758 12:00:07 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:17:09.758 12:00:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:09.758 12:00:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:09.758 12:00:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:10.017 12:00:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:10.017 12:00:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:10.017 { 00:17:10.017 "name": "ftl0", 00:17:10.017 "aliases": [ 00:17:10.017 "ef29c1cf-2535-40a8-a97f-5e20363ab578" 00:17:10.017 ], 00:17:10.017 "product_name": "FTL disk", 00:17:10.017 "block_size": 4096, 00:17:10.017 "num_blocks": 23592960, 00:17:10.017 "uuid": "ef29c1cf-2535-40a8-a97f-5e20363ab578", 00:17:10.017 "assigned_rate_limits": { 00:17:10.017 "rw_ios_per_sec": 0, 00:17:10.017 "rw_mbytes_per_sec": 0, 00:17:10.017 "r_mbytes_per_sec": 0, 00:17:10.017 "w_mbytes_per_sec": 0 00:17:10.017 }, 00:17:10.017 "claimed": false, 00:17:10.017 "zoned": false, 00:17:10.017 "supported_io_types": { 00:17:10.017 "read": true, 00:17:10.017 "write": true, 00:17:10.017 "unmap": true, 00:17:10.017 "flush": true, 00:17:10.017 "reset": false, 00:17:10.017 "nvme_admin": false, 00:17:10.017 "nvme_io": false, 00:17:10.017 "nvme_io_md": false, 00:17:10.017 "write_zeroes": true, 00:17:10.017 "zcopy": false, 00:17:10.017 "get_zone_info": false, 00:17:10.017 "zone_management": false, 00:17:10.017 "zone_append": false, 00:17:10.017 "compare": false, 00:17:10.017 "compare_and_write": false, 00:17:10.017 "abort": false, 00:17:10.017 "seek_hole": false, 00:17:10.017 "seek_data": false, 00:17:10.017 "copy": false, 00:17:10.017 "nvme_iov_md": false 00:17:10.017 }, 00:17:10.017 "driver_specific": { 00:17:10.017 "ftl": { 00:17:10.017 "base_bdev": "20fa4417-78d8-4cfa-a369-536da6ab2c7f", 
00:17:10.017 "cache": "nvc0n1p0" 00:17:10.017 } 00:17:10.017 } 00:17:10.017 } 00:17:10.017 ]' 00:17:10.017 12:00:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:10.017 12:00:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:10.017 12:00:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:10.277 [2024-11-18 12:00:07.864572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.864637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:10.277 [2024-11-18 12:00:07.864652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.277 [2024-11-18 12:00:07.864662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.864701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:10.277 [2024-11-18 12:00:07.867296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.867431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:10.277 [2024-11-18 12:00:07.867453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:17:10.277 [2024-11-18 12:00:07.867461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.868075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.868091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:10.277 [2024-11-18 12:00:07.868102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:17:10.277 [2024-11-18 12:00:07.868109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.871765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.871784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:10.277 [2024-11-18 12:00:07.871794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:17:10.277 [2024-11-18 12:00:07.871801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.878798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.878824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:10.277 [2024-11-18 12:00:07.878836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:17:10.277 [2024-11-18 12:00:07.878843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.902659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.902692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:10.277 [2024-11-18 12:00:07.902706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.729 ms 00:17:10.277 [2024-11-18 12:00:07.902714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.919285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.919318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:10.277 [2024-11-18 12:00:07.919334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.499 ms 00:17:10.277 [2024-11-18 12:00:07.919342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.919563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.919579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:10.277 [2024-11-18 12:00:07.919601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:10.277 [2024-11-18 12:00:07.919608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.942265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.942381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:10.277 [2024-11-18 12:00:07.942399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:17:10.277 [2024-11-18 12:00:07.942407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.277 [2024-11-18 12:00:07.965189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.277 [2024-11-18 12:00:07.965287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:10.277 [2024-11-18 12:00:07.965340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.723 ms 00:17:10.277 [2024-11-18 12:00:07.965362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.538 [2024-11-18 12:00:07.987299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.538 [2024-11-18 12:00:07.987402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:10.538 [2024-11-18 12:00:07.987453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.848 ms 00:17:10.538 [2024-11-18 12:00:07.987474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.538 [2024-11-18 12:00:08.009654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.538 [2024-11-18 12:00:08.009750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:10.538 [2024-11-18 12:00:08.009798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.035 ms 00:17:10.538 [2024-11-18 12:00:08.009820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.538 [2024-11-18 12:00:08.009923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:10.538 [2024-11-18 12:00:08.009953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.009986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.010015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.010093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.010126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.010159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.010187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:10.538 [2024-11-18 12:00:08.010218] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[2024-11-18 12:00:08.010276 .. 12:00:08.014037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 9-100: 0 / 261120 wr_cnt: 0 state: free (identical entry for every band)
[2024-11-18 12:00:08.014056] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-18 12:00:08.014068] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578
[2024-11-18 12:00:08.014076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-18 12:00:08.014085] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-18 12:00:08.014094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-18 12:00:08.014104] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-18 12:00:08.014110] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-18 12:00:08.014119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit:  0
[2024-11-18 12:00:08.014126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high:  0
[2024-11-18 12:00:08.014134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low:   0
[2024-11-18 12:00:08.014140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
[2024-11-18 12:00:08.014148] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 4.227 ms, status 0
[2024-11-18 12:00:08.026617] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 12.380 ms, status 0
[2024-11-18 12:00:08.027040] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.314 ms, status 0
[2024-11-18 12:00:08.070831] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-11-18 12:00:08.070983] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-11-18 12:00:08.071066] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-11-18 12:00:08.071132] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-11-18 12:00:08.153418] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-11-18 12:00:08.217233] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-11-18 12:00:08.217501] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-11-18 12:00:08.217617] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-11-18 12:00:08.217755] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-11-18 12:00:08.217843] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-11-18 12:00:08.217925] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-11-18 12:00:08.218014] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-11-18 12:00:08.218220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.633 ms, result 0
true
12:00:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73480
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73480 ']'
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73480
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73480
killing process with pid 73480
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73480'
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73480
12:00:08 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73480
12:00:14 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
65536+0 records in
65536+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 1.09099 s, 246 MB/s
12:00:15 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-18 12:00:15.362225] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
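The shell trace above shows autotest_common.sh's killprocess helper tearing down the SPDK app (pid 73480): it checks the pid exists with kill -0, looks up the process name, refuses to signal a sudo wrapper, sends SIGTERM, and reaps the process. A minimal bash re-sketch of that pattern as visible in the trace (illustrative, not the helper's actual source):

    # Re-sketch of the kill-and-reap pattern traced above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # '[' -z 73480 ']' in the trace
        kill -0 "$pid" || return 0                # process already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 here
        else
            process_name=unknown                  # non-Linux branch not shown in this trace
        fi
        [ "$process_name" = sudo ] && return 1    # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap and propagate exit status; works because the
                      # test script started the process itself
    }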
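trim.sh then stages the data for the trim test: dd generates 256 MiB of random 4 KiB blocks, and spdk_dd writes that pattern into the ftl0 bdev using the saved JSON bdev configuration. A condensed sketch of the flow with the paths from the log; the trace does not show where dd's output is redirected, so the of= target below is an assumption:

    # Condensed from the trim.sh steps traced above (paths as in the log).
    SPDK=/home/vagrant/spdk_repo/spdk
    PATTERN=$SPDK/test/ftl/random_pattern

    # 65536 x 4 KiB random blocks = 256 MiB test pattern; writing it to
    # $PATTERN is an assumption (the redirect is not part of the xtrace).
    dd if=/dev/urandom of=$PATTERN bs=4K count=65536

    # Push the pattern through the FTL bdev: --ob selects the output bdev,
    # --json supplies the bdev configuration saved earlier by the test.
    $SPDK/build/bin/spdk_dd --if=$PATTERN --ob=ftl0 \
        --json=$SPDK/test/ftl/config/ftl.json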
[2024-11-18 12:00:15.362376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73663 ]
[2024-11-18 12:00:15.525726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-18 12:00:15.627563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-18 12:00:15.904831] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-18 12:00:15.904909] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-18 12:00:16.064107] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.005 ms, status 0
[2024-11-18 12:00:16.067196] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 3.000 ms, status 0
[2024-11-18 12:00:16.067960] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-18 12:00:16.068766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-18 12:00:16.068804] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.858 ms, status 0
[2024-11-18 12:00:16.070124] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-18 12:00:16.083175] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 13.053 ms, status 0
[2024-11-18 12:00:16.083331] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.021 ms, status 0
[2024-11-18 12:00:16.089397] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 5.976 ms, status 0
[2024-11-18 12:00:16.089541] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.054 ms, status 0
[2024-11-18 12:00:16.089618] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.007 ms, status 0
[2024-11-18 12:00:16.089669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-18 12:00:16.093263] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 3.599 ms, status 0
[2024-11-18 12:00:16.093494] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.012 ms, status 0
[2024-11-18 12:00:16.093555] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-18 12:00:16.093601] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
[2024-11-18 12:00:16.093760] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
[2024-11-18 12:00:16.093792] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-18 12:00:16.093804] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-18 12:00:16.093813] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
[2024-11-18 12:00:16.093844] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.290 ms, status 0
[2024-11-18 12:00:16.093955] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
[2024-11-18 12:00:16.094078] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region           offset (MiB)  blocks (MiB)
    sb                      0.00          0.12
    l2p                     0.12         90.00
    band_md                90.12          0.50
    band_md_mirror         90.62          0.50
    p2l0                   91.12          8.00
    p2l1                   99.12          8.00
    p2l2                  107.12          8.00
    p2l3                  115.12          8.00
    trim_md               123.12          0.25
    trim_md_mirror        123.38          0.25
    trim_log              123.62          0.12
    trim_log_mirror       123.75          0.12
    nvc_md                123.88          0.12
    nvc_md_mirror         124.00          0.12
[2024-11-18 12:00:16.094397] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    sb_mirror               0.00          0.12
    data_btm                0.25     102400.00
    vmap               102400.25          3.38
[2024-11-18 12:00:16.094471] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    type:0x0        ver:5  blk_offs:0x0       blk_sz:0x20
    type:0x2        ver:0  blk_offs:0x20      blk_sz:0x5a00
    type:0x3        ver:2  blk_offs:0x5a20    blk_sz:0x80
    type:0x4        ver:2  blk_offs:0x5aa0    blk_sz:0x80
    type:0xa        ver:2  blk_offs:0x5b20    blk_sz:0x800
    type:0xb        ver:2  blk_offs:0x6320    blk_sz:0x800
    type:0xc        ver:2  blk_offs:0x6b20    blk_sz:0x800
    type:0xd        ver:2  blk_offs:0x7320    blk_sz:0x800
    type:0xe        ver:0  blk_offs:0x7b20    blk_sz:0x40
    type:0xf        ver:0  blk_offs:0x7b60    blk_sz:0x40
    type:0x10       ver:1  blk_offs:0x7ba0    blk_sz:0x20
    type:0x11       ver:1  blk_offs:0x7bc0    blk_sz:0x20
    type:0x6        ver:2  blk_offs:0x7be0    blk_sz:0x20
    type:0x7        ver:2  blk_offs:0x7c00    blk_sz:0x20
    type:0xfffffffe ver:0  blk_offs:0x7c20    blk_sz:0x13b6e0
[2024-11-18 12:00:16.094607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    type:0x1        ver:5  blk_offs:0x0        blk_sz:0x20
    type:0xfffffffe ver:0  blk_offs:0x20       blk_sz:0x20
    type:0x9        ver:0  blk_offs:0x40       blk_sz:0x1900000
    type:0x5        ver:0  blk_offs:0x1900040  blk_sz:0x360
    type:0xfffffffe ver:0  blk_offs:0x19003a0  blk_sz:0x3fc60
[2024-11-18 12:00:16.094655] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.644 ms, status 0
[2024-11-18 12:00:16.122610] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 27.881 ms, status 0
[2024-11-18 12:00:16.122939] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.062 ms, status 0
[2024-11-18 12:00:16.165428] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 42.335 ms, status 0
[2024-11-18 12:00:16.165786] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.003 ms, status 0
[2024-11-18 12:00:16.166201] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.315 ms, status 0
[2024-11-18 12:00:16.166423] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.104 ms, status 0
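In the SB metadata dump, blk_offs and blk_sz are counted in FTL blocks, while the layout dump above reports MiB. Assuming the 4 KiB block size these figures imply, the two views agree; for example the l2p region (type 0x2, blk_sz 0x5a00) works out to the same 90.00 MiB as the 23592960 L2P entries times the 4-byte address size. A quick shell cross-check:

    # l2p region: type 0x2, blk_sz 0x5a00 FTL blocks (4 KiB each, assumed)
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # -> 90, matches "l2p ... blocks: 90.00 MiB"
    # Same figure from the mapping table itself: entries * address size
    echo $(( 23592960 * 4 / 1024 / 1024 ))    # -> 90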
[2024-11-18 12:00:16.180192] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 13.613 ms, status 0
[2024-11-18 12:00:16.193210] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
[2024-11-18 12:00:16.193245] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-18 12:00:16.193257] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 12.831 ms, status 0
[2024-11-18 12:00:16.217764] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 24.414 ms, status 0
[2024-11-18 12:00:16.229478] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 11.580 ms, status 0
[2024-11-18 12:00:16.241036] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 11.427 ms, status 0
[2024-11-18 12:00:16.241692] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.522 ms, status 0
[2024-11-18 12:00:16.297604] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 55.852 ms, status 0
[2024-11-18 12:00:16.308044] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-18 12:00:16.322269] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 24.514 ms, status 0
[2024-11-18 12:00:16.322398] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.013 ms, status 0
[2024-11-18 12:00:16.322471] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.028 ms, status 0
[2024-11-18 12:00:16.322523] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.007 ms, status 0
[2024-11-18 12:00:16.322577] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-18 12:00:16.322608] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.032 ms, status 0
[2024-11-18 12:00:16.346392] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 23.739 ms, status 0
[2024-11-18 12:00:16.346655] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.038 ms, status 0
[2024-11-18 12:00:16.347469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-18 12:00:16.350379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 283.065 ms, result 0
[2024-11-18 12:00:16.351222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-18 12:00:16.364216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-18T12:00:18.704Z] Copying: 17/256 [MB] (17 MBps)
[2024-11-18T12:00:19.641Z] Copying: 39/256 [MB] (21 MBps)
[2024-11-18T12:00:20.575Z] Copying: 57/256 [MB] (18 MBps)
[2024-11-18T12:00:21.517Z] Copying: 98/256 [MB] (40 MBps)
[2024-11-18T12:00:22.460Z] Copying: 111/256 [MB] (12 MBps)
[2024-11-18T12:00:23.402Z] Copying: 123/256 [MB] (11 MBps)
[2024-11-18T12:00:24.793Z] Copying: 150/256 [MB] (27 MBps)
[2024-11-18T12:00:25.737Z] Copying: 165/256 [MB] (14 MBps)
[2024-11-18T12:00:26.677Z] Copying: 175/256 [MB] (10 MBps)
[2024-11-18T12:00:27.611Z] Copying: 189/256 [MB] (13 MBps)
[2024-11-18T12:00:28.178Z] Copying: 224/256 [MB] (35 MBps)
[2024-11-18T12:00:28.178Z] Copying: 256/256 [MB] (average 21 MBps)
[2024-11-18 12:00:28.118935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-18 12:00:28.126156] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.003 ms, status 0
[2024-11-18 12:00:28.126221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-18 12:00:28.128314] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 2.082 ms, status 0
[2024-11-18 12:00:28.130077] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 1.703 ms, status 0
[2024-11-18 12:00:28.135451] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 5.322 ms, status 0
[2024-11-18 12:00:28.140898] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 5.381 ms, status 0
[2024-11-18 12:00:28.158231] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 17.257 ms, status 0
[2024-11-18 12:00:28.169656] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 11.358 ms, status 0
[2024-11-18 12:00:28.169895] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.065 ms, status 0
[2024-11-18 12:00:28.187321] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 17.394 ms, status 0
[2024-11-18 12:00:28.204806] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 17.403 ms, status 0
[2024-11-18 12:00:28.222680] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 17.721 ms, status 0
[2024-11-18 12:00:28.240646] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 17.883 ms, status 0
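Each management step is emitted by trace_step as a name/duration/status triple, which makes it straightforward to rank slow steps when a startup or shutdown regresses. A small sketch that pulls the pairs out of a raw (one record per line) console log; the console.log filename is illustrative:

    # Rank FTL management steps by duration from a saved console log.
    awk '/trace_step:.*name:/     { sub(/.*name: /, ""); name = $0 }
         /trace_step:.*duration:/ { sub(/.*duration: /, "");
                                    printf "%10s ms  %s\n", $1, name }' console.log |
        sort -rn | head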
[2024-11-18 12:00:28.240708] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-18 12:00:28.240722 .. 12:00:28.241278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-98: 0 / 261120 wr_cnt: 0 state: free (identical entry for every band)
[2024-11-18 12:00:28.241283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free 00:17:30.740 [2024-11-18 12:00:28.241289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.740 [2024-11-18 12:00:28.241301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.740 [2024-11-18 12:00:28.241307] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578 00:17:30.740 [2024-11-18 12:00:28.241313] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.740 [2024-11-18 12:00:28.241318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.740 [2024-11-18 12:00:28.241323] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.740 [2024-11-18 12:00:28.241329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.740 [2024-11-18 12:00:28.241334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.740 [2024-11-18 12:00:28.241340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.740 [2024-11-18 12:00:28.241345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.740 [2024-11-18 12:00:28.241349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.740 [2024-11-18 12:00:28.241354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.740 [2024-11-18 12:00:28.241359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.740 [2024-11-18 12:00:28.241365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.740 [2024-11-18 12:00:28.241374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:17:30.740 [2024-11-18 12:00:28.241379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.740 [2024-11-18 12:00:28.250859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.740 [2024-11-18 12:00:28.250881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.740 [2024-11-18 12:00:28.250889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.467 ms 00:17:30.740 [2024-11-18 12:00:28.250894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.740 [2024-11-18 12:00:28.251167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.740 [2024-11-18 12:00:28.251179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.740 [2024-11-18 12:00:28.251185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:30.740 [2024-11-18 12:00:28.251191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.740 [2024-11-18 12:00:28.278680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.740 [2024-11-18 12:00:28.278705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.740 [2024-11-18 12:00:28.278712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.740 [2024-11-18 12:00:28.278718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.740 [2024-11-18 12:00:28.278774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.740 [2024-11-18 12:00:28.278782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.740 [2024-11-18 12:00:28.278788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
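The "WAF: inf" entry above follows directly from the two counters printed just before it: write amplification factor is the ratio of total media writes to user writes, and this run performed 960 internal (metadata) writes against zero user writes, so the ratio is undefined and is printed as "inf". As a reading aid (notation mine, not part of the log):

    \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\rightarrow\; \texttt{inf}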
00:17:30.740 [2024-11-18 12:00:28.241359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.740 [2024-11-18 12:00:28.241365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:30.740 [2024-11-18 12:00:28.241374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms
00:17:30.740 [2024-11-18 12:00:28.241379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.250859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.740 [2024-11-18 12:00:28.250881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:30.740 [2024-11-18 12:00:28.250889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.467 ms
00:17:30.740 [2024-11-18 12:00:28.250894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.251167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:30.740 [2024-11-18 12:00:28.251179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:30.740 [2024-11-18 12:00:28.251185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms
00:17:30.740 [2024-11-18 12:00:28.251191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.278680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.278705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:30.740 [2024-11-18 12:00:28.278712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.278718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.278774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.278782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:30.740 [2024-11-18 12:00:28.278788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.278793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.278826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.278833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:30.740 [2024-11-18 12:00:28.278838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.278844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.278857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.278863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:30.740 [2024-11-18 12:00:28.278871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.278876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.338336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.338369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:30.740 [2024-11-18 12:00:28.338377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.338384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.385918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.385948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:30.740 [2024-11-18 12:00:28.385960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.385966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.386012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:30.740 [2024-11-18 12:00:28.386019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.386025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.386053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:30.740 [2024-11-18 12:00:28.386059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.386067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.386144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:30.740 [2024-11-18 12:00:28.386150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.386156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.386185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:30.740 [2024-11-18 12:00:28.386191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.386197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.386234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:30.740 [2024-11-18 12:00:28.386240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.386247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:30.740 [2024-11-18 12:00:28.386288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:30.740 [2024-11-18 12:00:28.386293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:30.740 [2024-11-18 12:00:28.386301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:30.740 [2024-11-18 12:00:28.386403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 260.239 ms, result 0
00:17:31.307
00:17:31.307
00:17:31.307 12:00:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73817
00:17:31.307 12:00:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:17:31.307 12:00:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73817
00:17:31.307 12:00:28 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73817 ']'
00:17:31.307 12:00:28 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:31.307 12:00:28 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:31.307 12:00:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:31.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:31.307 12:00:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:17:31.307 12:00:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
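The xtrace lines above show the bring-up pattern trim.sh relies on before the SPDK banner that follows: start spdk_tgt in the background, record its pid in svcpid, and have waitforlisten poll the RPC socket until the target answers. A minimal stand-alone sketch of that pattern, assuming only the repo path seen in this log; the retry loop here is mine, the real logic lives in autotest_common.sh:

    #!/usr/bin/env bash
    # Sketch: launch spdk_tgt and wait for its RPC socket to come up.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path as used in this run
    rpc_addr=/var/tmp/spdk.sock             # default RPC listen address

    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do         # mirrors max_retries=100 above
        # rpc_get_methods fails until the target is actually listening
        "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    (( i == 100 )) && { echo "spdk_tgt never started listening" >&2; exit 1; }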
00:17:31.565 [2024-11-18 12:00:29.042805] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:17:31.565 [2024-11-18 12:00:29.042891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ]
00:17:31.565 [2024-11-18 12:00:29.190326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.823 [2024-11-18 12:00:29.265407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:32.418 12:00:29 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:17:32.418 12:00:29 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0
00:17:32.418 12:00:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:17:32.418 [2024-11-18 12:00:30.084995] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:32.418 [2024-11-18 12:00:30.085141] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:32.678 [2024-11-18 12:00:30.255715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.255837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:32.678 [2024-11-18 12:00:30.255854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:32.678 [2024-11-18 12:00:30.255861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.257942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.257969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:32.678 [2024-11-18 12:00:30.257978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms
00:17:32.678 [2024-11-18 12:00:30.257984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.258041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:32.678 [2024-11-18 12:00:30.258546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:32.678 [2024-11-18 12:00:30.258559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.258566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:32.678 [2024-11-18 12:00:30.258574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms
00:17:32.678 [2024-11-18 12:00:30.258579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.259802] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:32.678 [2024-11-18 12:00:30.269804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.269835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:32.678 [2024-11-18 12:00:30.269846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.007 ms
00:17:32.678 [2024-11-18 12:00:30.269855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.269919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.269930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:32.678 [2024-11-18 12:00:30.269937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:17:32.678 [2024-11-18 12:00:30.269944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.274321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.274437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:32.678 [2024-11-18 12:00:30.274450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms
00:17:32.678 [2024-11-18 12:00:30.274457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.274539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.274548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:32.678 [2024-11-18 12:00:30.274555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:17:32.678 [2024-11-18 12:00:30.274564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.274595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.274603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:17:32.678 [2024-11-18 12:00:30.274609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:17:32.678 [2024-11-18 12:00:30.274616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.274635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:17:32.678 [2024-11-18 12:00:30.277298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.277319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:32.678 [2024-11-18 12:00:30.277328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms
00:17:32.678 [2024-11-18 12:00:30.277334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.277362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.277368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:17:32.678 [2024-11-18 12:00:30.277376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:17:32.678 [2024-11-18 12:00:30.277383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.277400] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:17:32.678 [2024-11-18 12:00:30.277413] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:17:32.678 [2024-11-18 12:00:30.277445] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:17:32.678 [2024-11-18 12:00:30.277457] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:17:32.678 [2024-11-18 12:00:30.277537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:17:32.678 [2024-11-18 12:00:30.277545] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:17:32.678 [2024-11-18 12:00:30.277559] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:17:32.678 [2024-11-18 12:00:30.277566] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:32.678 [2024-11-18 12:00:30.277574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:32.678 [2024-11-18 12:00:30.277683] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:17:32.678 [2024-11-18 12:00:30.277710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:32.678 [2024-11-18 12:00:30.277726] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:17:32.678 [2024-11-18 12:00:30.277744] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:17:32.678 [2024-11-18 12:00:30.277759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.277775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:17:32.678 [2024-11-18 12:00:30.277791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms
00:17:32.678 [2024-11-18 12:00:30.278114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.278206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.678 [2024-11-18 12:00:30.278216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:17:32.678 [2024-11-18 12:00:30.278224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:17:32.678 [2024-11-18 12:00:30.278233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.678 [2024-11-18 12:00:30.278311] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:17:32.678 [2024-11-18 12:00:30.278321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:17:32.678 [2024-11-18 12:00:30.278327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:32.678 [2024-11-18 12:00:30.278335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:32.678 [2024-11-18 12:00:30.278341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:17:32.679 [2024-11-18 12:00:30.278347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:17:32.679 [2024-11-18 12:00:30.278370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:32.679 [2024-11-18 12:00:30.278381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:17:32.679 [2024-11-18 12:00:30.278387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:17:32.679 [2024-11-18 12:00:30.278392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:17:32.679 [2024-11-18 12:00:30.278399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:17:32.679 [2024-11-18 12:00:30.278405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:17:32.679 [2024-11-18 12:00:30.278411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:17:32.679 [2024-11-18 12:00:30.278422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:17:32.679 [2024-11-18 12:00:30.278444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:17:32.679 [2024-11-18 12:00:30.278464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:17:32.679 [2024-11-18 12:00:30.278484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:17:32.679 [2024-11-18 12:00:30.278502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:17:32.679 [2024-11-18 12:00:30.278519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:32.679 [2024-11-18 12:00:30.278532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:17:32.679 [2024-11-18 12:00:30.278538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:17:32.679 [2024-11-18 12:00:30.278543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:17:32.679 [2024-11-18 12:00:30.278549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:17:32.679 [2024-11-18 12:00:30.278554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:17:32.679 [2024-11-18 12:00:30.278562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:17:32.679 [2024-11-18 12:00:30.278573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:17:32.679 [2024-11-18 12:00:30.278578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278598] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:17:32.679 [2024-11-18 12:00:30.278607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:17:32.679 [2024-11-18 12:00:30.278614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:17:32.679 [2024-11-18 12:00:30.278626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:17:32.679 [2024-11-18 12:00:30.278631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:17:32.679 [2024-11-18 12:00:30.278638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:17:32.679 [2024-11-18 12:00:30.278643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:17:32.679 [2024-11-18 12:00:30.278649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:17:32.679 [2024-11-18 12:00:30.278655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:17:32.679 [2024-11-18 12:00:30.278663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:17:32.679 [2024-11-18 12:00:30.278671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:17:32.679 [2024-11-18 12:00:30.278686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:17:32.679 [2024-11-18 12:00:30.278694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:17:32.679 [2024-11-18 12:00:30.278699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:17:32.679 [2024-11-18 12:00:30.278706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:17:32.679 [2024-11-18 12:00:30.278711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:17:32.679 [2024-11-18 12:00:30.278718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:17:32.679 [2024-11-18 12:00:30.278723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:17:32.679 [2024-11-18 12:00:30.278730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:17:32.679 [2024-11-18 12:00:30.278736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:17:32.679 [2024-11-18 12:00:30.278767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:17:32.679 [2024-11-18 12:00:30.278773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:17:32.679 [2024-11-18 12:00:30.278788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:17:32.679 [2024-11-18 12:00:30.278795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:17:32.679 [2024-11-18 12:00:30.278800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
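The superblock region table above can be cross-checked against the human-readable layout dump that precedes it. Region type:0x2 on the NV cache device is evidently the L2P map (its size matches): a blk_sz of 0x5a00 blocks, at the 4 KiB FTL block size this arithmetic implies (the block size itself is not printed), is exactly the 90.00 MiB reported for "Region l2p", and the same figure equals the 23592960 L2P entries times the 4-byte address size from the setup summary. Worked out (notation mine):

    0\mathrm{x}5a00 = 23040 \text{ blocks}, \quad 23040 \times 4\,\mathrm{KiB} = 90.00\,\mathrm{MiB}
    23592960 \text{ entries} \times 4\,\mathrm{B} = 94371840\,\mathrm{B} = 90.00\,\mathrm{MiB}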
00:17:32.679 [2024-11-18 12:00:30.278807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.679 [2024-11-18 12:00:30.278814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:17:32.679 [2024-11-18 12:00:30.278820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms
00:17:32.679 [2024-11-18 12:00:30.278828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.679 [2024-11-18 12:00:30.299496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.679 [2024-11-18 12:00:30.299523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:32.679 [2024-11-18 12:00:30.299532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.610 ms
00:17:32.679 [2024-11-18 12:00:30.299540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.679 [2024-11-18 12:00:30.299649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.679 [2024-11-18 12:00:30.299659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:17:32.679 [2024-11-18 12:00:30.299667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:17:32.679 [2024-11-18 12:00:30.299673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.679 [2024-11-18 12:00:30.323315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.679 [2024-11-18 12:00:30.323341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:32.679 [2024-11-18 12:00:30.323350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.625 ms
00:17:32.680 [2024-11-18 12:00:30.323356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.323407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.323414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:32.680 [2024-11-18 12:00:30.323421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:17:32.680 [2024-11-18 12:00:30.323427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.323722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.323734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:32.680 [2024-11-18 12:00:30.323745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms
00:17:32.680 [2024-11-18 12:00:30.323750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.323851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.323858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:32.680 [2024-11-18 12:00:30.323865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms
00:17:32.680 [2024-11-18 12:00:30.323871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.335371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.335400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:32.680 [2024-11-18 12:00:30.335409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.482 ms
00:17:32.680 [2024-11-18 12:00:30.335415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.345189] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:17:32.680 [2024-11-18 12:00:30.345294] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:17:32.680 [2024-11-18 12:00:30.345310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.345316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:17:32.680 [2024-11-18 12:00:30.345324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.816 ms
00:17:32.680 [2024-11-18 12:00:30.345330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.364190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.364297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:17:32.680 [2024-11-18 12:00:30.364313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.816 ms
00:17:32.680 [2024-11-18 12:00:30.364319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.680 [2024-11-18 12:00:30.373648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.680 [2024-11-18 12:00:30.373672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:17:32.680 [2024-11-18 12:00:30.373683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.276 ms
00:17:32.680 [2024-11-18 12:00:30.373689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.939 [2024-11-18 12:00:30.382654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.382747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:17:32.940 [2024-11-18 12:00:30.382761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.922 ms
00:17:32.940 [2024-11-18 12:00:30.382766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.383226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.383242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:32.940 [2024-11-18 12:00:30.383250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms
00:17:32.940 [2024-11-18 12:00:30.383256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.438033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.438073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:17:32.940 [2024-11-18 12:00:30.438086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.757 ms
00:17:32.940 [2024-11-18 12:00:30.438093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.445738] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:17:32.940 [2024-11-18 12:00:30.456975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.457121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:17:32.940 [2024-11-18 12:00:30.457136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.818 ms
00:17:32.940 [2024-11-18 12:00:30.457143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.457214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.457223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:17:32.940 [2024-11-18 12:00:30.457230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:17:32.940 [2024-11-18 12:00:30.457237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.457275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.457283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:17:32.940 [2024-11-18 12:00:30.457289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:17:32.940 [2024-11-18 12:00:30.457298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.457315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.457323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:32.940 [2024-11-18 12:00:30.457329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:32.940 [2024-11-18 12:00:30.457338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.457364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:32.940 [2024-11-18 12:00:30.457376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.457382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:32.940 [2024-11-18 12:00:30.457389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:17:32.940 [2024-11-18 12:00:30.457395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.475248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.475275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:32.940 [2024-11-18 12:00:30.475285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.833 ms
00:17:32.940 [2024-11-18 12:00:30.475291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.475361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:32.940 [2024-11-18 12:00:30.475369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:32.940 [2024-11-18 12:00:30.475377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:17:32.940 [2024-11-18 12:00:30.475385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:32.940 [2024-11-18 12:00:30.476045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:32.940 [2024-11-18 12:00:30.478401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.104 ms, result 0
00:17:32.940 [2024-11-18 12:00:30.479954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:32.940 Some configs were skipped because the RPC state that can call them passed over.
00:17:32.940 12:00:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:33.199 [2024-11-18 12:00:30.704947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:33.199 [2024-11-18 12:00:30.705061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:33.199 [2024-11-18 12:00:30.705107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms
00:17:33.199 [2024-11-18 12:00:30.705127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:33.199 [2024-11-18 12:00:30.705165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.480 ms, result 0
00:17:33.199 true
00:17:33.199 12:00:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:33.457 [2024-11-18 12:00:30.904501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:33.457 [2024-11-18 12:00:30.904531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:33.457 [2024-11-18 12:00:30.904540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms
00:17:33.457 [2024-11-18 12:00:30.904546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:33.457 [2024-11-18 12:00:30.904572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.703 ms, result 0
00:17:33.457 true
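The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the addressable space: the second starts at LBA 23591936 = 23592960 - 1024, i.e. exactly num_blocks before the end of the 23592960-entry L2P range reported at startup. Reproduced as a stand-alone sketch (the RPC name and flags are verbatim from the log; the surrounding variables are mine):

    #!/usr/bin/env bash
    # Sketch: trim the first and the last 1024 blocks of FTL bdev ftl0,
    # as trim.sh@78-79 does above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    num_blocks=1024
    l2p_entries=23592960    # "L2P entries" from the startup dump

    "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$num_blocks"
    "$rpc" bdev_ftl_unmap -b ftl0 --lba $((l2p_entries - num_blocks)) --num_blocks "$num_blocks"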
00:17:33.457 12:00:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73817
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73817 ']'
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73817
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73817
00:17:33.457 killing process with pid 73817 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73817'
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73817
00:17:33.457 12:00:30 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73817
00:17:34.026 [2024-11-18 12:00:31.475884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.475929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:34.026 [2024-11-18 12:00:31.475940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:34.026 [2024-11-18 12:00:31.475947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.475967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:34.026 [2024-11-18 12:00:31.478039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.478062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:34.026 [2024-11-18 12:00:31.478074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms
00:17:34.026 [2024-11-18 12:00:31.478080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.478299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.478307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:34.026 [2024-11-18 12:00:31.478315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms
00:17:34.026 [2024-11-18 12:00:31.478320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.481949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.482095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:34.026 [2024-11-18 12:00:31.482112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms
00:17:34.026 [2024-11-18 12:00:31.482118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.487438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.487461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:34.026 [2024-11-18 12:00:31.487471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.287 ms
00:17:34.026 [2024-11-18 12:00:31.487478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.495638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.495662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:34.026 [2024-11-18 12:00:31.495673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.113 ms
00:17:34.026 [2024-11-18 12:00:31.495683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.502302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.502330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:34.026 [2024-11-18 12:00:31.502339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.586 ms
00:17:34.026 [2024-11-18 12:00:31.502345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.502449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.502457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:34.026 [2024-11-18 12:00:31.502464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:17:34.026 [2024-11-18 12:00:31.502470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.510866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.510889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:17:34.026 [2024-11-18 12:00:31.510898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.380 ms
00:17:34.026 [2024-11-18 12:00:31.510903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.518991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.519015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:17:34.026 [2024-11-18 12:00:31.519025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.058 ms
00:17:34.026 [2024-11-18 12:00:31.519031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.526441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.526464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:34.026 [2024-11-18 12:00:31.526474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.381 ms
00:17:34.026 [2024-11-18 12:00:31.526480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.533476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.026 [2024-11-18 12:00:31.533588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:34.026 [2024-11-18 12:00:31.533603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.935 ms
00:17:34.026 [2024-11-18 12:00:31.533608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.026 [2024-11-18 12:00:31.533634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:34.026 [2024-11-18 12:00:31.533646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:17:34.026 [2024-11-18 12:00:31.533768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.533996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:17:34.027 [2024-11-18 12:00:31.534181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85:
0 / 261120 wr_cnt: 0 state: free 00:17:34.027 [2024-11-18 12:00:31.534190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:34.027 [2024-11-18 12:00:31.534196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:34.027 [2024-11-18 12:00:31.534202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:34.027 [2024-11-18 12:00:31.534208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:34.027 [2024-11-18 12:00:31.534215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:34.027 [2024-11-18 12:00:31.534220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:34.028 [2024-11-18 12:00:31.534294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:34.028 [2024-11-18 12:00:31.534306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578 00:17:34.028 [2024-11-18 12:00:31.534317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:34.028 [2024-11-18 12:00:31.534323] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:34.028 [2024-11-18 12:00:31.534329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:34.028 [2024-11-18 12:00:31.534335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:34.028 [2024-11-18 12:00:31.534341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:34.028 [2024-11-18 12:00:31.534348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:34.028 [2024-11-18 12:00:31.534353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:34.028 [2024-11-18 12:00:31.534359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:34.028 [2024-11-18 12:00:31.534365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:34.028 [2024-11-18 12:00:31.534372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
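
The "WAF: inf" in the dump above is worth a gloss: write amplification factor is conventionally total media writes divided by user writes, and this dump shows total writes: 960 against user writes: 0, so 960 / 0 is reported as infinity. That is expected for this phase; the shutdown persisted only FTL metadata (band info, trim state, superblock), so the denominator is legitimately zero.
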
00:17:34.028 [2024-11-18 12:00:31.534372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.028 [2024-11-18 12:00:31.534377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:34.028 [2024-11-18 12:00:31.534385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms
00:17:34.028 [2024-11-18 12:00:31.534390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.543945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.028 [2024-11-18 12:00:31.543968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:34.028 [2024-11-18 12:00:31.543978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.536 ms
00:17:34.028 [2024-11-18 12:00:31.543983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.544267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.028 [2024-11-18 12:00:31.544280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:34.028 [2024-11-18 12:00:31.544292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms
00:17:34.028 [2024-11-18 12:00:31.544297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.579265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.579291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:34.028 [2024-11-18 12:00:31.579301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.579307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.579378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.579386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:34.028 [2024-11-18 12:00:31.579402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.579408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.579442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.579448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:34.028 [2024-11-18 12:00:31.579457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.579463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.579478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.579484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:34.028 [2024-11-18 12:00:31.579491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.579497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.638150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.638179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:34.028 [2024-11-18 12:00:31.638189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.638195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
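
Each management step in this trace is logged as an Action/Rollback, name, duration, status quadruple, which makes slow steps easy to rank mechanically. A minimal sketch, assuming the console output has been saved to build.log (a hypothetical path) with one entry per line, as Jenkins stores it:

    #!/usr/bin/env bash
    # Rank FTL management steps by duration: 428:trace_step lines carry the
    # step name, 430:trace_step lines the matching duration in milliseconds.
    awk '/428:trace_step/ { sub(/.*name: /, "");     step = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0, step }' build.log |
      sort -gr | head

Run over this whole log, that puts the ~44 ms "Initialize NV cache" and ~43 ms "Restore P2L checkpoints" startup steps at the top, with the ~25 ms persist steps of the final shutdown close behind.
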
00:17:34.028 [2024-11-18 12:00:31.685792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.685823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:34.028 [2024-11-18 12:00:31.685833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.685841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.685898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.685906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:34.028 [2024-11-18 12:00:31.685915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.685921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.685946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.685952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:34.028 [2024-11-18 12:00:31.685960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.685966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.686035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.686044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:34.028 [2024-11-18 12:00:31.686052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.686057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.686082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.686089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:34.028 [2024-11-18 12:00:31.686096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.686102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.686133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.686141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:34.028 [2024-11-18 12:00:31.686149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.686155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.686190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.028 [2024-11-18 12:00:31.686197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:34.028 [2024-11-18 12:00:31.686205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.028 [2024-11-18 12:00:31.686212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.028 [2024-11-18 12:00:31.686319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.416 ms, result 0
00:17:34.596 12:00:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:34.596 12:00:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:34.596 [2024-11-18 12:00:32.254199] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:17:34.596 [2024-11-18 12:00:32.254318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73862 ] 00:17:34.856 [2024-11-18 12:00:32.409121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.856 [2024-11-18 12:00:32.483654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.114 [2024-11-18 12:00:32.688217] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.114 [2024-11-18 12:00:32.688264] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.373 [2024-11-18 12:00:32.841129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.841160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:35.374 [2024-11-18 12:00:32.841170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.374 [2024-11-18 12:00:32.841177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.843263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.843289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.374 [2024-11-18 12:00:32.843297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:17:35.374 [2024-11-18 12:00:32.843302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.843357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:35.374 [2024-11-18 12:00:32.843894] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:35.374 [2024-11-18 12:00:32.843908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.843914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.374 [2024-11-18 12:00:32.843921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:17:35.374 [2024-11-18 12:00:32.843926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.844894] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:35.374 [2024-11-18 12:00:32.854522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.854658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:35.374 [2024-11-18 12:00:32.854672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.628 ms 00:17:35.374 [2024-11-18 12:00:32.854679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.854746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.854755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:35.374 [2024-11-18 12:00:32.854761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.015 ms 00:17:35.374 [2024-11-18 12:00:32.854767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.859050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.859075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.374 [2024-11-18 12:00:32.859083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.255 ms 00:17:35.374 [2024-11-18 12:00:32.859088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.859158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.859165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.374 [2024-11-18 12:00:32.859172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:35.374 [2024-11-18 12:00:32.859177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.859193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.859201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:35.374 [2024-11-18 12:00:32.859207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.374 [2024-11-18 12:00:32.859213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.859231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:35.374 [2024-11-18 12:00:32.861984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.862091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.374 [2024-11-18 12:00:32.862104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:17:35.374 [2024-11-18 12:00:32.862110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.862139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.862145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:35.374 [2024-11-18 12:00:32.862151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:35.374 [2024-11-18 12:00:32.862157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.862170] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:35.374 [2024-11-18 12:00:32.862186] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:35.374 [2024-11-18 12:00:32.862212] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:35.374 [2024-11-18 12:00:32.862224] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:35.374 [2024-11-18 12:00:32.862302] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:35.374 [2024-11-18 12:00:32.862311] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:35.374 [2024-11-18 12:00:32.862319] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:35.374 [2024-11-18 12:00:32.862327] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862335] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862341] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:35.374 [2024-11-18 12:00:32.862347] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:35.374 [2024-11-18 12:00:32.862353] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:35.374 [2024-11-18 12:00:32.862359] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:35.374 [2024-11-18 12:00:32.862364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.862370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:35.374 [2024-11-18 12:00:32.862376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:35.374 [2024-11-18 12:00:32.862381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.862448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.374 [2024-11-18 12:00:32.862455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:35.374 [2024-11-18 12:00:32.862462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:35.374 [2024-11-18 12:00:32.862467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.374 [2024-11-18 12:00:32.862539] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:35.374 [2024-11-18 12:00:32.862547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:35.374 [2024-11-18 12:00:32.862553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:35.374 [2024-11-18 12:00:32.862569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:35.374 [2024-11-18 12:00:32.862604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.374 [2024-11-18 12:00:32.862614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:35.374 [2024-11-18 12:00:32.862621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:35.374 [2024-11-18 12:00:32.862626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.374 [2024-11-18 12:00:32.862635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:35.374 [2024-11-18 12:00:32.862641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:35.374 [2024-11-18 12:00:32.862646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862651] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:35.374 [2024-11-18 12:00:32.862657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:35.374 [2024-11-18 12:00:32.862672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:35.374 [2024-11-18 12:00:32.862687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:35.374 [2024-11-18 12:00:32.862702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:35.374 [2024-11-18 12:00:32.862707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.374 [2024-11-18 12:00:32.862712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:35.374 [2024-11-18 12:00:32.862718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:35.375 [2024-11-18 12:00:32.862722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.375 [2024-11-18 12:00:32.862727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:35.375 [2024-11-18 12:00:32.862732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:35.375 [2024-11-18 12:00:32.862737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.375 [2024-11-18 12:00:32.862741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:35.375 [2024-11-18 12:00:32.862746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:35.375 [2024-11-18 12:00:32.862751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.375 [2024-11-18 12:00:32.862756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:35.375 [2024-11-18 12:00:32.862761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:35.375 [2024-11-18 12:00:32.862767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.375 [2024-11-18 12:00:32.862772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:35.375 [2024-11-18 12:00:32.862777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:35.375 [2024-11-18 12:00:32.862782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.375 [2024-11-18 12:00:32.862787] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:35.375 [2024-11-18 12:00:32.862793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:35.375 [2024-11-18 12:00:32.862799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.375 [2024-11-18 12:00:32.862806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.375 [2024-11-18 12:00:32.862811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:35.375 
[2024-11-18 12:00:32.862816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:35.375 [2024-11-18 12:00:32.862822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:35.375 [2024-11-18 12:00:32.862827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:35.375 [2024-11-18 12:00:32.862832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:35.375 [2024-11-18 12:00:32.862837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:35.375 [2024-11-18 12:00:32.862843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:35.375 [2024-11-18 12:00:32.862850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:35.375 [2024-11-18 12:00:32.862861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:35.375 [2024-11-18 12:00:32.862867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:35.375 [2024-11-18 12:00:32.862872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:35.375 [2024-11-18 12:00:32.862878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:35.375 [2024-11-18 12:00:32.862883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:35.375 [2024-11-18 12:00:32.862889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:35.375 [2024-11-18 12:00:32.862894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:35.375 [2024-11-18 12:00:32.862899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:35.375 [2024-11-18 12:00:32.862904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:35.375 [2024-11-18 12:00:32.862931] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:35.375 [2024-11-18 12:00:32.862937] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:35.375 [2024-11-18 12:00:32.862949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:35.375 [2024-11-18 12:00:32.862954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:35.375 [2024-11-18 12:00:32.862959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:35.375 [2024-11-18 12:00:32.862965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.862971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:35.375 [2024-11-18 12:00:32.862979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:17:35.375 [2024-11-18 12:00:32.862985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.883816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.883840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.375 [2024-11-18 12:00:32.883849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.795 ms 00:17:35.375 [2024-11-18 12:00:32.883855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.883945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.883955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:35.375 [2024-11-18 12:00:32.883962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:35.375 [2024-11-18 12:00:32.883967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.927973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.928003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.375 [2024-11-18 12:00:32.928012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.989 ms 00:17:35.375 [2024-11-18 12:00:32.928020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.928082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.928091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.375 [2024-11-18 12:00:32.928098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:35.375 [2024-11-18 12:00:32.928104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.928377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.928390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.375 [2024-11-18 12:00:32.928397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:35.375 [2024-11-18 12:00:32.928403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 
12:00:32.928510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.928518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.375 [2024-11-18 12:00:32.928525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:35.375 [2024-11-18 12:00:32.928531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.939233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.939257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.375 [2024-11-18 12:00:32.939265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.686 ms 00:17:35.375 [2024-11-18 12:00:32.939271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.948962] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:35.375 [2024-11-18 12:00:32.948988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:35.375 [2024-11-18 12:00:32.948997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.949004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:35.375 [2024-11-18 12:00:32.949011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.637 ms 00:17:35.375 [2024-11-18 12:00:32.949016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.967748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.967863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:35.375 [2024-11-18 12:00:32.967876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.686 ms 00:17:35.375 [2024-11-18 12:00:32.967883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.375 [2024-11-18 12:00:32.977101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.375 [2024-11-18 12:00:32.977126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:35.376 [2024-11-18 12:00:32.977133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.166 ms 00:17:35.376 [2024-11-18 12:00:32.977139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:32.985888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:32.985911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:35.376 [2024-11-18 12:00:32.985918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.707 ms 00:17:35.376 [2024-11-18 12:00:32.985924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:32.986382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:32.986397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:35.376 [2024-11-18 12:00:32.986405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:17:35.376 [2024-11-18 12:00:32.986410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.029898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.029935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:35.376 [2024-11-18 12:00:33.029945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.469 ms 00:17:35.376 [2024-11-18 12:00:33.029952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.037607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:35.376 [2024-11-18 12:00:33.048766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.048794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.376 [2024-11-18 12:00:33.048804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.753 ms 00:17:35.376 [2024-11-18 12:00:33.048810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.048883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.048891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:35.376 [2024-11-18 12:00:33.048898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:35.376 [2024-11-18 12:00:33.048904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.048940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.048948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:35.376 [2024-11-18 12:00:33.048954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:35.376 [2024-11-18 12:00:33.048960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.048981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.048989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:35.376 [2024-11-18 12:00:33.048995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:35.376 [2024-11-18 12:00:33.049001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.049023] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:35.376 [2024-11-18 12:00:33.049030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.049036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:35.376 [2024-11-18 12:00:33.049042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:35.376 [2024-11-18 12:00:33.049048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.066736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.066848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:35.376 [2024-11-18 12:00:33.066862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms 00:17:35.376 [2024-11-18 12:00:33.066868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.376 [2024-11-18 12:00:33.066936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.376 [2024-11-18 12:00:33.066944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization
00:17:35.376 [2024-11-18 12:00:33.066951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:17:35.376 [2024-11-18 12:00:33.066957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.376 [2024-11-18 12:00:33.067607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:35.376 [2024-11-18 12:00:33.069816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.230 ms, result 0
00:17:35.376 [2024-11-18 12:00:33.070649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:35.636 [2024-11-18 12:00:33.085503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:36.579 [2024-11-18T12:00:35.226Z] Copying: 20/256 [MB] (20 MBps)
[2024-11-18T12:00:36.172Z] Copying: 41/256 [MB] (20 MBps)
[2024-11-18T12:00:37.116Z] Copying: 56/256 [MB] (14 MBps)
[2024-11-18T12:00:38.505Z] Copying: 67/256 [MB] (10 MBps)
[2024-11-18T12:00:39.451Z] Copying: 80/256 [MB] (13 MBps)
[2024-11-18T12:00:40.392Z] Copying: 98/256 [MB] (18 MBps)
[2024-11-18T12:00:41.335Z] Copying: 114/256 [MB] (16 MBps)
[2024-11-18T12:00:42.281Z] Copying: 127/256 [MB] (13 MBps)
[2024-11-18T12:00:43.223Z] Copying: 145/256 [MB] (17 MBps)
[2024-11-18T12:00:44.167Z] Copying: 157/256 [MB] (12 MBps)
[2024-11-18T12:00:45.112Z] Copying: 175/256 [MB] (18 MBps)
[2024-11-18T12:00:46.501Z] Copying: 191/256 [MB] (15 MBps)
[2024-11-18T12:00:47.446Z] Copying: 205/256 [MB] (14 MBps)
[2024-11-18T12:00:48.388Z] Copying: 217/256 [MB] (11 MBps)
[2024-11-18T12:00:49.333Z] Copying: 229/256 [MB] (12 MBps)
[2024-11-18T12:00:50.276Z] Copying: 240/256 [MB] (10 MBps)
[2024-11-18T12:00:50.850Z] Copying: 250/256 [MB] (10 MBps)
[2024-11-18T12:00:50.850Z] Copying: 256/256 [MB] (average 14 MBps)
[2024-11-18 12:00:50.619836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
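
A quick cross-check of the transfer that just completed: spdk_dd was asked for --count=65536 blocks and the progress meter ends at 256/256 [MB], which works out to 4 KiB per block (65536 x 4096 B = 268435456 B = 256 MiB). A sketch for verifying the output file matches (paths are the test's own):

    #!/usr/bin/env bash
    # The readback above should have produced exactly 65536 * 4096 bytes.
    expected=$((65536 * 4096))   # 268435456 = 256 MiB
    actual=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/data)
    [ "$actual" -eq "$expected" ] && echo "data: $actual bytes, as expected"

At the reported average of 14 MBps, 256 MiB in roughly 18 seconds is also consistent with the progress timestamps (12:00:35 through 12:00:50).
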
00:17:53.149 [2024-11-18 12:00:50.630297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.149 [2024-11-18 12:00:50.630352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:53.149 [2024-11-18 12:00:50.630369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:53.149 [2024-11-18 12:00:50.630386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.149 [2024-11-18 12:00:50.630411] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:53.150 [2024-11-18 12:00:50.633453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.633505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:53.150 [2024-11-18 12:00:50.633518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms
00:17:53.150 [2024-11-18 12:00:50.633526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.633813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.633825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:53.150 [2024-11-18 12:00:50.633836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms
00:17:53.150 [2024-11-18 12:00:50.633845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.637566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.637603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:53.150 [2024-11-18 12:00:50.637613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms
00:17:53.150 [2024-11-18 12:00:50.637620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.644550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.644607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:53.150 [2024-11-18 12:00:50.644620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms
00:17:53.150 [2024-11-18 12:00:50.644628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.670867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.670920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:53.150 [2024-11-18 12:00:50.670934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.171 ms
00:17:53.150 [2024-11-18 12:00:50.670941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.687773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.688000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:53.150 [2024-11-18 12:00:50.688024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.762 ms
00:17:53.150 [2024-11-18 12:00:50.688040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.688193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.688205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:53.150 [2024-11-18 12:00:50.688215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:17:53.150 [2024-11-18 12:00:50.688223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.714431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.714482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:17:53.150 [2024-11-18 12:00:50.714494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.182 ms
00:17:53.150 [2024-11-18 12:00:50.714502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.740442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.740501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:17:53.150 [2024-11-18 12:00:50.740513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.873 ms
00:17:53.150 [2024-11-18 12:00:50.740520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.765890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.766096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:53.150 [2024-11-18 12:00:50.766117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.284 ms
00:17:53.150 [2024-11-18 12:00:50.766125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.791670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.150 [2024-11-18 12:00:50.791720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:53.150 [2024-11-18 12:00:50.791732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.363 ms
00:17:53.150 [2024-11-18 12:00:50.791739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.150 [2024-11-18 12:00:50.791791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:53.150 [2024-11-18 12:00:50.791807 .. 12:00:50.792521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 94: 0 / 261120 wr_cnt: 0 state: free (94 identical entries elided)
00:17:53.151 [2024-11-18 12:00:50.792529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95:
0 / 261120 wr_cnt: 0 state: free 00:17:53.151 [2024-11-18 12:00:50.792546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:53.151 [2024-11-18 12:00:50.792554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:53.151 [2024-11-18 12:00:50.792562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:53.151 [2024-11-18 12:00:50.792569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:53.151 [2024-11-18 12:00:50.792577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:53.152 [2024-11-18 12:00:50.792620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:53.152 [2024-11-18 12:00:50.792629] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578 00:17:53.152 [2024-11-18 12:00:50.792639] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:53.152 [2024-11-18 12:00:50.792647] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:53.152 [2024-11-18 12:00:50.792655] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:53.152 [2024-11-18 12:00:50.792663] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:53.152 [2024-11-18 12:00:50.792689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:53.152 [2024-11-18 12:00:50.792698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:53.152 [2024-11-18 12:00:50.792706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:53.152 [2024-11-18 12:00:50.792713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:53.152 [2024-11-18 12:00:50.792719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:53.152 [2024-11-18 12:00:50.792727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.152 [2024-11-18 12:00:50.792738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:53.152 [2024-11-18 12:00:50.792747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:17:53.152 [2024-11-18 12:00:50.792755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.152 [2024-11-18 12:00:50.806225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.152 [2024-11-18 12:00:50.806267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:53.152 [2024-11-18 12:00:50.806279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.437 ms 00:17:53.152 [2024-11-18 12:00:50.806287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.152 [2024-11-18 12:00:50.806718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.152 [2024-11-18 12:00:50.806731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:53.152 [2024-11-18 12:00:50.806741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:17:53.152 [2024-11-18 12:00:50.806750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:50.846151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:50.846204] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.414 [2024-11-18 12:00:50.846215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:50.846223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:50.846315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:50.846325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.414 [2024-11-18 12:00:50.846334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:50.846343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:50.846402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:50.846412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.414 [2024-11-18 12:00:50.846420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:50.846428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:50.846446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:50.846457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.414 [2024-11-18 12:00:50.846464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:50.846472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:50.931128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:50.931184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.414 [2024-11-18 12:00:50.931199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:50.931208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:51.000759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:51.000819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.414 [2024-11-18 12:00:51.000831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:51.000839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:51.000899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:51.000910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.414 [2024-11-18 12:00:51.000919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:51.000927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:51.000959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:51.000970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.414 [2024-11-18 12:00:51.000983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:51.000991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:51.001089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
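
Annotation: the statistics dump above reports total writes: 960, user writes: 0 and WAF: inf. Those figures are consistent with the usual definition of write amplification (media writes divided by user writes), which diverges when the workload has written no user data yet, as here where only FTL metadata was written. A minimal sketch of that arithmetic, assuming the dump's total/user counters are the two inputs:

def waf(total_writes: int, user_writes: int) -> float:
    # Write amplification factor: media writes per user write. With zero
    # user writes (as in the dump above: total 960, user 0) the ratio is
    # infinite, which the log prints as "WAF: inf".
    return float("inf") if user_writes == 0 else total_writes / user_writes

assert waf(960, 0) == float("inf")
assert waf(960, 480) == 2.0  # hypothetical healthy case: 2x amplification
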
00:17:53.414 [2024-11-18 12:00:51.001099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.414 [2024-11-18 12:00:51.001108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:51.001116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:51.001150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.414 [2024-11-18 12:00:51.001160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:53.414 [2024-11-18 12:00:51.001168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.414 [2024-11-18 12:00:51.001180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.414 [2024-11-18 12:00:51.001223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.415 [2024-11-18 12:00:51.001234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.415 [2024-11-18 12:00:51.001242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.415 [2024-11-18 12:00:51.001250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.415 [2024-11-18 12:00:51.001299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.415 [2024-11-18 12:00:51.001310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.415 [2024-11-18 12:00:51.001323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.415 [2024-11-18 12:00:51.001331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.415 [2024-11-18 12:00:51.001490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.180 ms, result 0 00:17:54.359 00:17:54.359 00:17:54.359 12:00:51 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:54.359 12:00:51 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:54.932 12:00:52 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.932 [2024-11-18 12:00:52.427799] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
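
Annotation: both the 'FTL shutdown' sequence that just finished (reported aggregate: 371.180 ms) and the 'FTL startup' sequence that follows are built from repeating trace_step groups (Action/Rollback, name, duration, status), so per-step time can be tallied straight from the log. The sketch below is a reading aid, not part of the test; the record shapes it matches are assumptions taken from the lines in this log, and whitespace is normalized first because records wrap across lines:

import re
import sys

# One match per trace_step group: "... name: <step> HH:MM:SS.mmm [timestamp]
# mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][dev] duration: <ms> ms ..."
# If a name record is ever not followed by its duration record (e.g. a log
# truncated mid-group), that group is simply skipped or mis-joined.
PAIR = re.compile(
    r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3} "
    r"\[[^\]]+\] mngt/ftl_mngt\.c: 430:trace_step: "
    r"\*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms"
)

def step_durations(raw: str):
    text = " ".join(raw.split())  # undo line wrapping so split records match
    return [(name, float(ms)) for name, ms in PAIR.findall(text)]

if __name__ == "__main__":
    steps = step_durations(open(sys.argv[1]).read())
    for name, ms in sorted(steps, key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  total")

Run against a saved copy of this log, the per-step sum should come close to the finish_msg totals, since the management process duration is dominated by its traced steps.
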
00:17:54.932 [2024-11-18 12:00:52.428299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74077 ] 00:17:54.932 [2024-11-18 12:00:52.599327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.193 [2024-11-18 12:00:52.720980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.455 [2024-11-18 12:00:53.009833] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.455 [2024-11-18 12:00:53.009917] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.718 [2024-11-18 12:00:53.172475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.172542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.718 [2024-11-18 12:00:53.172557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:55.718 [2024-11-18 12:00:53.172565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.175541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.175776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.718 [2024-11-18 12:00:53.175798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:17:55.718 [2024-11-18 12:00:53.175807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.176032] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.718 [2024-11-18 12:00:53.176835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.718 [2024-11-18 12:00:53.176877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.176886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.718 [2024-11-18 12:00:53.176896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:17:55.718 [2024-11-18 12:00:53.176904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.178868] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:55.718 [2024-11-18 12:00:53.193469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.193527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:55.718 [2024-11-18 12:00:53.193543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.603 ms 00:17:55.718 [2024-11-18 12:00:53.193551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.193699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.193714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:55.718 [2024-11-18 12:00:53.193724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:55.718 [2024-11-18 12:00:53.193755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.202146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:55.718 [2024-11-18 12:00:53.202194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.718 [2024-11-18 12:00:53.202205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.342 ms 00:17:55.718 [2024-11-18 12:00:53.202214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.202327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.202339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.718 [2024-11-18 12:00:53.202348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:55.718 [2024-11-18 12:00:53.202356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.202386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.202398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.718 [2024-11-18 12:00:53.202406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:55.718 [2024-11-18 12:00:53.202414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.202437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:55.718 [2024-11-18 12:00:53.206747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.206789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.718 [2024-11-18 12:00:53.206800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:17:55.718 [2024-11-18 12:00:53.206809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.206886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.206897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.718 [2024-11-18 12:00:53.206906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:55.718 [2024-11-18 12:00:53.206915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.206937] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:55.718 [2024-11-18 12:00:53.206962] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:55.718 [2024-11-18 12:00:53.207000] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:55.718 [2024-11-18 12:00:53.207017] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:55.718 [2024-11-18 12:00:53.207123] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.718 [2024-11-18 12:00:53.207135] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.718 [2024-11-18 12:00:53.207146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.718 [2024-11-18 12:00:53.207157] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.718 [2024-11-18 12:00:53.207170] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.718 [2024-11-18 12:00:53.207179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:55.718 [2024-11-18 12:00:53.207188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.718 [2024-11-18 12:00:53.207196] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.718 [2024-11-18 12:00:53.207204] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.718 [2024-11-18 12:00:53.207212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.207220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.718 [2024-11-18 12:00:53.207229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:55.718 [2024-11-18 12:00:53.207236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.207324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.718 [2024-11-18 12:00:53.207334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.718 [2024-11-18 12:00:53.207344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:55.718 [2024-11-18 12:00:53.207351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.718 [2024-11-18 12:00:53.207466] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.718 [2024-11-18 12:00:53.207476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.718 [2024-11-18 12:00:53.207486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.718 [2024-11-18 12:00:53.207495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.719 [2024-11-18 12:00:53.207510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.719 [2024-11-18 12:00:53.207533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.719 [2024-11-18 12:00:53.207547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.719 [2024-11-18 12:00:53.207554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:55.719 [2024-11-18 12:00:53.207561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.719 [2024-11-18 12:00:53.207577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.719 [2024-11-18 12:00:53.207617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:55.719 [2024-11-18 12:00:53.207624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.719 [2024-11-18 12:00:53.207639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207647] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.719 [2024-11-18 12:00:53.207662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.719 [2024-11-18 12:00:53.207683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.719 [2024-11-18 12:00:53.207706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.719 [2024-11-18 12:00:53.207727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.719 [2024-11-18 12:00:53.207748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.719 [2024-11-18 12:00:53.207762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.719 [2024-11-18 12:00:53.207768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:55.719 [2024-11-18 12:00:53.207774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.719 [2024-11-18 12:00:53.207781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.719 [2024-11-18 12:00:53.207788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:55.719 [2024-11-18 12:00:53.207794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.719 [2024-11-18 12:00:53.207808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:55.719 [2024-11-18 12:00:53.207815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207822] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.719 [2024-11-18 12:00:53.207830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.719 [2024-11-18 12:00:53.207839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.719 [2024-11-18 12:00:53.207860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.719 [2024-11-18 12:00:53.207868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.719 [2024-11-18 12:00:53.207875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.719 
[2024-11-18 12:00:53.207883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.719 [2024-11-18 12:00:53.207890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.719 [2024-11-18 12:00:53.207896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.719 [2024-11-18 12:00:53.207905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.719 [2024-11-18 12:00:53.207915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.207924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:55.719 [2024-11-18 12:00:53.207933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:55.719 [2024-11-18 12:00:53.207941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:55.719 [2024-11-18 12:00:53.207962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:55.719 [2024-11-18 12:00:53.207970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:55.719 [2024-11-18 12:00:53.207978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:55.719 [2024-11-18 12:00:53.207985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:55.719 [2024-11-18 12:00:53.207993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:55.719 [2024-11-18 12:00:53.208000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:55.719 [2024-11-18 12:00:53.208007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.208014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.208022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.208029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.208036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:55.719 [2024-11-18 12:00:53.208044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.719 [2024-11-18 12:00:53.208052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.208060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.719 [2024-11-18 12:00:53.208068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.719 [2024-11-18 12:00:53.208076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.719 [2024-11-18 12:00:53.208083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.719 [2024-11-18 12:00:53.208090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.719 [2024-11-18 12:00:53.208100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.719 [2024-11-18 12:00:53.208111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:17:55.719 [2024-11-18 12:00:53.208121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.719 [2024-11-18 12:00:53.240513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.719 [2024-11-18 12:00:53.240567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.719 [2024-11-18 12:00:53.240607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.337 ms 00:17:55.719 [2024-11-18 12:00:53.240616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.719 [2024-11-18 12:00:53.240777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.719 [2024-11-18 12:00:53.240795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:55.719 [2024-11-18 12:00:53.240803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:55.719 [2024-11-18 12:00:53.240811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.719 [2024-11-18 12:00:53.284460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.284516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.720 [2024-11-18 12:00:53.284530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.625 ms 00:17:55.720 [2024-11-18 12:00:53.284543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.284684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.284697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.720 [2024-11-18 12:00:53.284708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.720 [2024-11-18 12:00:53.284717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.285236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.285287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.720 [2024-11-18 12:00:53.285298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:17:55.720 [2024-11-18 12:00:53.285313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.285476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.285487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.720 [2024-11-18 12:00:53.285495] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:55.720 [2024-11-18 12:00:53.285504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.302032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.302079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.720 [2024-11-18 12:00:53.302091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.506 ms 00:17:55.720 [2024-11-18 12:00:53.302100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.316319] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:55.720 [2024-11-18 12:00:53.316372] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:55.720 [2024-11-18 12:00:53.316388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.316397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:55.720 [2024-11-18 12:00:53.316407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:17:55.720 [2024-11-18 12:00:53.316414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.342806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.342866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:55.720 [2024-11-18 12:00:53.342879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.290 ms 00:17:55.720 [2024-11-18 12:00:53.342887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.356100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.356308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:55.720 [2024-11-18 12:00:53.356331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.135 ms 00:17:55.720 [2024-11-18 12:00:53.356338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.369856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.369905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:55.720 [2024-11-18 12:00:53.369919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.351 ms 00:17:55.720 [2024-11-18 12:00:53.369927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.720 [2024-11-18 12:00:53.370615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.720 [2024-11-18 12:00:53.370643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:55.720 [2024-11-18 12:00:53.370654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:17:55.720 [2024-11-18 12:00:53.370663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.437935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.438006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:55.983 [2024-11-18 12:00:53.438022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.242 ms 00:17:55.983 [2024-11-18 12:00:53.438032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.449503] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:55.983 [2024-11-18 12:00:53.468955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.469188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:55.983 [2024-11-18 12:00:53.469210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.810 ms 00:17:55.983 [2024-11-18 12:00:53.469220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.469333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.469345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:55.983 [2024-11-18 12:00:53.469356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:55.983 [2024-11-18 12:00:53.469364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.469423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.469433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:55.983 [2024-11-18 12:00:53.469442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:55.983 [2024-11-18 12:00:53.469450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.469478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.469489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:55.983 [2024-11-18 12:00:53.469498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:55.983 [2024-11-18 12:00:53.469507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.469544] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:55.983 [2024-11-18 12:00:53.469555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.469563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:55.983 [2024-11-18 12:00:53.469572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:55.983 [2024-11-18 12:00:53.469613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.496350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.496544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:55.983 [2024-11-18 12:00:53.496568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.712 ms 00:17:55.983 [2024-11-18 12:00:53.496577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.983 [2024-11-18 12:00:53.497072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.983 [2024-11-18 12:00:53.497122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:55.983 [2024-11-18 12:00:53.497136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:55.983 [2024-11-18 12:00:53.497146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
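
Annotation: the 'SB metadata layout' records in the startup sequence above list each region as hex block offsets and sizes (blk_offs/blk_sz), while the dump_region records give MiB directly. The two agree if one FTL block is 4 KiB: for example the l2p region, blk_sz:0x5a00 = 23040 blocks = 90.00 MiB, matching the dump. A small conversion sketch under that 4 KiB assumption:

import re

FTL_BLOCK_SIZE = 4096  # bytes per block; inferred from the dump above,
                       # where blk_sz:0x5a00 (23040 blocks) is shown as 90.00 MiB

REGION = re.compile(
    r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
    r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
)

def _mib(hex_blocks: str) -> float:
    return int(hex_blocks, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

def regions_in_mib(log_text: str):
    """Yield (type, version, offset_MiB, size_MiB) per superblock region entry."""
    for rtype, ver, offs, size in REGION.findall(log_text):
        yield rtype, int(ver), _mib(offs), _mib(size)

# Example with an entry copied from the log above (the l2p region, type 0x2):
sample = "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"
assert next(regions_in_mib(sample)) == ("0x2", 0, 0.125, 90.0)
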
00:17:55.983 [2024-11-18 12:00:53.499306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.983 [2024-11-18 12:00:53.503182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.479 ms, result 0 00:17:55.983 [2024-11-18 12:00:53.504456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:55.983 [2024-11-18 12:00:53.518252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.245  [2024-11-18T12:00:53.946Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-18 12:00:53.905175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.245 [2024-11-18 12:00:53.914487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.245 [2024-11-18 12:00:53.914538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:56.245 [2024-11-18 12:00:53.914552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.245 [2024-11-18 12:00:53.914569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.245 [2024-11-18 12:00:53.914620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:56.245 [2024-11-18 12:00:53.917578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.245 [2024-11-18 12:00:53.917629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:56.245 [2024-11-18 12:00:53.917642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:17:56.245 [2024-11-18 12:00:53.917650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.245 [2024-11-18 12:00:53.920496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.245 [2024-11-18 12:00:53.920698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:56.245 [2024-11-18 12:00:53.920729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:17:56.245 [2024-11-18 12:00:53.920737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.245 [2024-11-18 12:00:53.925000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.245 [2024-11-18 12:00:53.925067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:56.245 [2024-11-18 12:00:53.925254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.240 ms 00:17:56.245 [2024-11-18 12:00:53.925278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.245 [2024-11-18 12:00:53.932251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.245 [2024-11-18 12:00:53.932411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:56.245 [2024-11-18 12:00:53.932482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.922 ms 00:17:56.245 [2024-11-18 12:00:53.932506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.507 [2024-11-18 12:00:53.958631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.507 [2024-11-18 12:00:53.958832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.507 [2024-11-18 12:00:53.958977] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.057 ms 00:17:56.507 [2024-11-18 12:00:53.959004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.507 [2024-11-18 12:00:53.975669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.507 [2024-11-18 12:00:53.975864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.507 [2024-11-18 12:00:53.975986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.595 ms 00:17:56.507 [2024-11-18 12:00:53.976012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.507 [2024-11-18 12:00:53.976160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.507 [2024-11-18 12:00:53.976309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.507 [2024-11-18 12:00:53.976333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:56.507 [2024-11-18 12:00:53.976353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.507 [2024-11-18 12:00:54.002707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.507 [2024-11-18 12:00:54.002880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:56.507 [2024-11-18 12:00:54.002941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.311 ms 00:17:56.507 [2024-11-18 12:00:54.002962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.508 [2024-11-18 12:00:54.029427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.508 [2024-11-18 12:00:54.029614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:56.508 [2024-11-18 12:00:54.029677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.357 ms 00:17:56.508 [2024-11-18 12:00:54.029699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.508 [2024-11-18 12:00:54.055282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.508 [2024-11-18 12:00:54.055466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.508 [2024-11-18 12:00:54.055531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.518 ms 00:17:56.508 [2024-11-18 12:00:54.055552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.508 [2024-11-18 12:00:54.080863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.508 [2024-11-18 12:00:54.081038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.508 [2024-11-18 12:00:54.081096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.200 ms 00:17:56.508 [2024-11-18 12:00:54.081117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.508 [2024-11-18 12:00:54.081180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.508 [2024-11-18 12:00:54.081212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-99: ninety-eight identical ftl_dev_dump_bands records, each 0 / 261120 wr_cnt: 0 state: free]
00:17:56.509 [2024-11-18 12:00:54.084108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.509 [2024-11-18 12:00:54.084125] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.509 [2024-11-18 12:00:54.084134] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578 00:17:56.509 [2024-11-18 12:00:54.084142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.509 [2024-11-18 12:00:54.084150] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:56.509 [2024-11-18 12:00:54.084158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.509 [2024-11-18 12:00:54.084167] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.509 [2024-11-18 12:00:54.084174] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.509 [2024-11-18 12:00:54.084183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.509 [2024-11-18 12:00:54.084191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.509 [2024-11-18 12:00:54.084198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.509 [2024-11-18 12:00:54.084204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.509 [2024-11-18 12:00:54.084212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.509 [2024-11-18 12:00:54.084224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.509 [2024-11-18 12:00:54.084235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:17:56.509 [2024-11-18 12:00:54.084242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.509 [2024-11-18 12:00:54.097795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.509 [2024-11-18 12:00:54.097962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.509 [2024-11-18 12:00:54.097979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.521 ms 00:17:56.509 [2024-11-18 12:00:54.097987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.509 [2024-11-18 12:00:54.098418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.509 [2024-11-18 12:00:54.098441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.509 [2024-11-18 12:00:54.098452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:17:56.509 [2024-11-18 12:00:54.098460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.509 [2024-11-18 12:00:54.137523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.509 [2024-11-18 12:00:54.137721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.509 [2024-11-18 12:00:54.137742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.509 [2024-11-18 12:00:54.137751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.509 [2024-11-18 12:00:54.137846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.509 [2024-11-18 12:00:54.137855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.509 [2024-11-18 12:00:54.137864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.509 [2024-11-18 12:00:54.137872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.509 [2024-11-18 12:00:54.137930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.509 [2024-11-18 12:00:54.137941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.509 [2024-11-18 12:00:54.137949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.509 [2024-11-18 12:00:54.137957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.509 [2024-11-18 12:00:54.137974] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.509 [2024-11-18 12:00:54.137987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.509 [2024-11-18 12:00:54.137996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.509 [2024-11-18 12:00:54.138003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.224170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.224228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.769 [2024-11-18 12:00:54.224242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.224251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.294743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.294799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.769 [2024-11-18 12:00:54.294812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.294821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.294877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.294886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.769 [2024-11-18 12:00:54.294896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.294904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.294938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.294947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.769 [2024-11-18 12:00:54.294963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.294970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.295068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.295079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.769 [2024-11-18 12:00:54.295088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.295095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.295129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.295139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:56.769 [2024-11-18 12:00:54.295148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.295159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.295204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.295214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.769 [2024-11-18 12:00:54.295223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.295231] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.295280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.769 [2024-11-18 12:00:54.295292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.769 [2024-11-18 12:00:54.295304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.769 [2024-11-18 12:00:54.295312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.769 [2024-11-18 12:00:54.295503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.000 ms, result 0 00:17:57.336 00:17:57.336 00:17:57.336 12:00:54 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74107 00:17:57.336 12:00:54 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:57.336 12:00:54 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74107 00:17:57.336 12:00:54 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74107 ']' 00:17:57.336 12:00:54 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.336 12:00:54 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:57.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.336 12:00:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.336 12:00:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:57.336 12:00:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:57.336 [2024-11-18 12:00:54.950797] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:17:57.336 [2024-11-18 12:00:54.950931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74107 ] 00:17:57.595 [2024-11-18 12:00:55.106597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.595 [2024-11-18 12:00:55.194414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.161 12:00:55 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:58.162 12:00:55 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:58.162 12:00:55 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:58.420 [2024-11-18 12:00:55.996766] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.420 [2024-11-18 12:00:55.996814] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.680 [2024-11-18 12:00:56.160813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.160852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:58.680 [2024-11-18 12:00:56.160864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.680 [2024-11-18 12:00:56.160871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.162925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.162955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.680 [2024-11-18 12:00:56.162964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:17:58.680 [2024-11-18 12:00:56.162970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.163028] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:58.680 [2024-11-18 12:00:56.163549] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:58.680 [2024-11-18 12:00:56.163567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.163573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.680 [2024-11-18 12:00:56.163591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:17:58.680 [2024-11-18 12:00:56.163597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.164577] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:58.680 [2024-11-18 12:00:56.174130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.174163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:58.680 [2024-11-18 12:00:56.174173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.557 ms 00:17:58.680 [2024-11-18 12:00:56.174180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.174244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.174254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:58.680 [2024-11-18 12:00:56.174261] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:58.680 [2024-11-18 12:00:56.174268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.178621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.178650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.680 [2024-11-18 12:00:56.178657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.315 ms 00:17:58.680 [2024-11-18 12:00:56.178665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.178747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.178757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.680 [2024-11-18 12:00:56.178763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:58.680 [2024-11-18 12:00:56.178772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.178790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.178797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:58.680 [2024-11-18 12:00:56.178803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.680 [2024-11-18 12:00:56.178810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.178826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:58.680 [2024-11-18 12:00:56.181434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.181459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.680 [2024-11-18 12:00:56.181467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:17:58.680 [2024-11-18 12:00:56.181474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.181502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.181508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:58.680 [2024-11-18 12:00:56.181516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:58.680 [2024-11-18 12:00:56.181523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.181539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:58.680 [2024-11-18 12:00:56.181553] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:58.680 [2024-11-18 12:00:56.181594] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:58.680 [2024-11-18 12:00:56.181606] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:58.680 [2024-11-18 12:00:56.181685] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:58.680 [2024-11-18 12:00:56.181693] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:58.680 [2024-11-18 12:00:56.181705] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:58.680 [2024-11-18 12:00:56.181713] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:58.680 [2024-11-18 12:00:56.181721] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:58.680 [2024-11-18 12:00:56.181727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:58.680 [2024-11-18 12:00:56.181734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:58.680 [2024-11-18 12:00:56.181739] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:58.680 [2024-11-18 12:00:56.181748] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:58.680 [2024-11-18 12:00:56.181754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.181761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:58.680 [2024-11-18 12:00:56.181766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:17:58.680 [2024-11-18 12:00:56.181773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.181842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-11-18 12:00:56.181849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:58.680 [2024-11-18 12:00:56.181855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:58.680 [2024-11-18 12:00:56.181861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-11-18 12:00:56.181936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:58.680 [2024-11-18 12:00:56.181944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:58.680 [2024-11-18 12:00:56.181950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.680 [2024-11-18 12:00:56.181957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.680 [2024-11-18 12:00:56.181962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:58.680 [2024-11-18 12:00:56.181969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:58.680 [2024-11-18 12:00:56.181974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:58.680 [2024-11-18 12:00:56.181982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:58.680 [2024-11-18 12:00:56.181988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:58.680 [2024-11-18 12:00:56.181994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.680 [2024-11-18 12:00:56.181999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:58.680 [2024-11-18 12:00:56.182005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:58.680 [2024-11-18 12:00:56.182010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.680 [2024-11-18 12:00:56.182016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:58.681 [2024-11-18 12:00:56.182021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:58.681 [2024-11-18 12:00:56.182027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.681 
[2024-11-18 12:00:56.182032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:58.681 [2024-11-18 12:00:56.182038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:58.681 [2024-11-18 12:00:56.182058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:58.681 [2024-11-18 12:00:56.182077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:58.681 [2024-11-18 12:00:56.182092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:58.681 [2024-11-18 12:00:56.182109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:58.681 [2024-11-18 12:00:56.182126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.681 [2024-11-18 12:00:56.182138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:58.681 [2024-11-18 12:00:56.182145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:58.681 [2024-11-18 12:00:56.182150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.681 [2024-11-18 12:00:56.182156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:58.681 [2024-11-18 12:00:56.182161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:58.681 [2024-11-18 12:00:56.182169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:58.681 [2024-11-18 12:00:56.182187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:58.681 [2024-11-18 12:00:56.182192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182199] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:58.681 [2024-11-18 12:00:56.182206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:58.681 [2024-11-18 12:00:56.182212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.681 [2024-11-18 12:00:56.182225] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:58.681 [2024-11-18 12:00:56.182230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:58.681 [2024-11-18 12:00:56.182236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:58.681 [2024-11-18 12:00:56.182242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:58.681 [2024-11-18 12:00:56.182247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:58.681 [2024-11-18 12:00:56.182252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:58.681 [2024-11-18 12:00:56.182259] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:58.681 [2024-11-18 12:00:56.182266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:58.681 [2024-11-18 12:00:56.182280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:58.681 [2024-11-18 12:00:56.182288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:58.681 [2024-11-18 12:00:56.182293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:58.681 [2024-11-18 12:00:56.182301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:58.681 [2024-11-18 12:00:56.182306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:58.681 [2024-11-18 12:00:56.182313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:58.681 [2024-11-18 12:00:56.182318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:58.681 [2024-11-18 12:00:56.182325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:58.681 [2024-11-18 12:00:56.182330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:58.681 [2024-11-18 12:00:56.182362] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:58.681 [2024-11-18 
12:00:56.182368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:58.681 [2024-11-18 12:00:56.182382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:58.681 [2024-11-18 12:00:56.182388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:58.681 [2024-11-18 12:00:56.182394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:58.681 [2024-11-18 12:00:56.182401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-11-18 12:00:56.182407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:58.681 [2024-11-18 12:00:56.182413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:17:58.681 [2024-11-18 12:00:56.182420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-11-18 12:00:56.203252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-11-18 12:00:56.203281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.681 [2024-11-18 12:00:56.203291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.789 ms 00:17:58.681 [2024-11-18 12:00:56.203298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-11-18 12:00:56.203392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-11-18 12:00:56.203413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.681 [2024-11-18 12:00:56.203421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:58.681 [2024-11-18 12:00:56.203426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-11-18 12:00:56.227145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-11-18 12:00:56.227173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.681 [2024-11-18 12:00:56.227182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.700 ms 00:17:58.681 [2024-11-18 12:00:56.227188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-11-18 12:00:56.227233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-11-18 12:00:56.227240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.681 [2024-11-18 12:00:56.227247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:58.681 [2024-11-18 12:00:56.227252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-11-18 12:00:56.227545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-11-18 12:00:56.227562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.681 [2024-11-18 12:00:56.227571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:58.681 [2024-11-18 12:00:56.227577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:58.681 [2024-11-18 12:00:56.227687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.227695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.682 [2024-11-18 12:00:56.227702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:58.682 [2024-11-18 12:00:56.227708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.239264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.239291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.682 [2024-11-18 12:00:56.239300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.539 ms 00:17:58.682 [2024-11-18 12:00:56.239305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.249006] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:58.682 [2024-11-18 12:00:56.249045] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:58.682 [2024-11-18 12:00:56.249055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.249062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:58.682 [2024-11-18 12:00:56.249070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.673 ms 00:17:58.682 [2024-11-18 12:00:56.249076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.267528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.267555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:58.682 [2024-11-18 12:00:56.267566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.403 ms 00:17:58.682 [2024-11-18 12:00:56.267572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.276751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.276778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:58.682 [2024-11-18 12:00:56.276788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.105 ms 00:17:58.682 [2024-11-18 12:00:56.276794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.285520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.285546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:58.682 [2024-11-18 12:00:56.285555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.682 ms 00:17:58.682 [2024-11-18 12:00:56.285560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.286031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.286052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.682 [2024-11-18 12:00:56.286060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:58.682 [2024-11-18 12:00:56.286065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 
12:00:56.346459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.346503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:58.682 [2024-11-18 12:00:56.346517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.373 ms 00:17:58.682 [2024-11-18 12:00:56.346524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.354340] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:58.682 [2024-11-18 12:00:56.365728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.365763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.682 [2024-11-18 12:00:56.365775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.130 ms 00:17:58.682 [2024-11-18 12:00:56.365785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.365842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.365851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:58.682 [2024-11-18 12:00:56.365858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:58.682 [2024-11-18 12:00:56.365865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.365903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.365912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.682 [2024-11-18 12:00:56.365918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:58.682 [2024-11-18 12:00:56.365926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.365945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.365952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.682 [2024-11-18 12:00:56.365958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.682 [2024-11-18 12:00:56.365967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.682 [2024-11-18 12:00:56.365990] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:58.682 [2024-11-18 12:00:56.366002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.682 [2024-11-18 12:00:56.366008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:58.682 [2024-11-18 12:00:56.366015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:58.682 [2024-11-18 12:00:56.366020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.941 [2024-11-18 12:00:56.383998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.941 [2024-11-18 12:00:56.384026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.941 [2024-11-18 12:00:56.384037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:17:58.941 [2024-11-18 12:00:56.384043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.941 [2024-11-18 12:00:56.384113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.941 [2024-11-18 12:00:56.384121] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.941 [2024-11-18 12:00:56.384130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:58.941 [2024-11-18 12:00:56.384137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.941 [2024-11-18 12:00:56.384783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.941 [2024-11-18 12:00:56.387055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.733 ms, result 0 00:17:58.941 [2024-11-18 12:00:56.387947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.941 Some configs were skipped because the RPC state that can call them passed over. 00:17:58.941 12:00:56 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:58.941 [2024-11-18 12:00:56.608258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.941 [2024-11-18 12:00:56.608298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:58.941 [2024-11-18 12:00:56.608307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:17:58.941 [2024-11-18 12:00:56.608315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.941 [2024-11-18 12:00:56.608341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.326 ms, result 0 00:17:58.941 true 00:17:58.941 12:00:56 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:59.199 [2024-11-18 12:00:56.800344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.199 [2024-11-18 12:00:56.800374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:59.199 [2024-11-18 12:00:56.800383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:17:59.199 [2024-11-18 12:00:56.800389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.199 [2024-11-18 12:00:56.800416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.211 ms, result 0 00:17:59.199 true 00:17:59.200 12:00:56 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74107 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74107 ']' 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74107 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74107 00:17:59.200 killing process with pid 74107 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74107' 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74107 00:17:59.200 12:00:56 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74107 00:17:59.767 [2024-11-18 12:00:57.385875] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.767 [2024-11-18 12:00:57.385925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.767 [2024-11-18 12:00:57.385937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:59.767 [2024-11-18 12:00:57.385944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.767 [2024-11-18 12:00:57.385963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:59.767 [2024-11-18 12:00:57.388127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.767 [2024-11-18 12:00:57.388153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.767 [2024-11-18 12:00:57.388165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:17:59.767 [2024-11-18 12:00:57.388171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.767 [2024-11-18 12:00:57.388388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.767 [2024-11-18 12:00:57.388395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.767 [2024-11-18 12:00:57.388403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:17:59.767 [2024-11-18 12:00:57.388408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.767 [2024-11-18 12:00:57.391694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.391719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.768 [2024-11-18 12:00:57.391729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:17:59.768 [2024-11-18 12:00:57.391735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.396965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.397000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.768 [2024-11-18 12:00:57.397010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.200 ms 00:17:59.768 [2024-11-18 12:00:57.397016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.404304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.404329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.768 [2024-11-18 12:00:57.404339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:17:59.768 [2024-11-18 12:00:57.404349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.410973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.411003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.768 [2024-11-18 12:00:57.411012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.591 ms 00:17:59.768 [2024-11-18 12:00:57.411019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.411126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.411133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.768 [2024-11-18 12:00:57.411141] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:59.768 [2024-11-18 12:00:57.411147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.418769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.418794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.768 [2024-11-18 12:00:57.418802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.606 ms 00:17:59.768 [2024-11-18 12:00:57.418807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.426205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.426230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.768 [2024-11-18 12:00:57.426241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.366 ms 00:17:59.768 [2024-11-18 12:00:57.426246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.433141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.433166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.768 [2024-11-18 12:00:57.433177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:17:59.768 [2024-11-18 12:00:57.433182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.440108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.768 [2024-11-18 12:00:57.440133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.768 [2024-11-18 12:00:57.440142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.876 ms 00:17:59.768 [2024-11-18 12:00:57.440148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.768 [2024-11-18 12:00:57.440177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.768 [2024-11-18 12:00:57.440189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440256] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 
[2024-11-18 12:00:57.440412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.768 [2024-11-18 12:00:57.440494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:59.769 [2024-11-18 12:00:57.440568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.769 [2024-11-18 12:00:57.440851] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.769 [2024-11-18 12:00:57.440863] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578 00:17:59.769 [2024-11-18 12:00:57.440873] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.769 [2024-11-18 12:00:57.440880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.769 [2024-11-18 12:00:57.440885] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.769 [2024-11-18 12:00:57.440891] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.769 [2024-11-18 12:00:57.440896] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.769 [2024-11-18 12:00:57.440903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.769 [2024-11-18 12:00:57.440909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.769 [2024-11-18 12:00:57.440915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.769 [2024-11-18 12:00:57.440919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.769 [2024-11-18 12:00:57.440926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:59.769 [2024-11-18 12:00:57.440932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.769 [2024-11-18 12:00:57.440939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:17:59.769 [2024-11-18 12:00:57.440945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.769 [2024-11-18 12:00:57.450563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.769 [2024-11-18 12:00:57.450594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.769 [2024-11-18 12:00:57.450605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.599 ms 00:17:59.769 [2024-11-18 12:00:57.450611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.769 [2024-11-18 12:00:57.450892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.769 [2024-11-18 12:00:57.450905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.769 [2024-11-18 12:00:57.450914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:59.769 [2024-11-18 12:00:57.450920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.485867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.485895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.028 [2024-11-18 12:00:57.485904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.485911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.485985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.485993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.028 [2024-11-18 12:00:57.486002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.486008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.486040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.486047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.028 [2024-11-18 12:00:57.486055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.486061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.486076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.486081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.028 [2024-11-18 12:00:57.486088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.486093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.545065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.545096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.028 [2024-11-18 12:00:57.545107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.545112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 
12:00:57.593157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.028 [2024-11-18 12:00:57.593202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.028 [2024-11-18 12:00:57.593285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.028 [2024-11-18 12:00:57.593329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.028 [2024-11-18 12:00:57.593419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.028 [2024-11-18 12:00:57.593465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.028 [2024-11-18 12:00:57.593516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.028 [2024-11-18 12:00:57.593565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.028 [2024-11-18 12:00:57.593572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.028 [2024-11-18 12:00:57.593578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.028 [2024-11-18 12:00:57.593696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 207.804 ms, result 0 00:18:00.680 12:00:58 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:00.680 [2024-11-18 12:00:58.165535] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:00.680 [2024-11-18 12:00:58.165671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74154 ] 00:18:00.680 [2024-11-18 12:00:58.321393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.939 [2024-11-18 12:00:58.405633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.940 [2024-11-18 12:00:58.611844] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:00.940 [2024-11-18 12:00:58.611897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.199 [2024-11-18 12:00:58.763571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.199 [2024-11-18 12:00:58.763620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:01.199 [2024-11-18 12:00:58.763634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:01.199 [2024-11-18 12:00:58.763642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.199 [2024-11-18 12:00:58.765728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.199 [2024-11-18 12:00:58.765761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.199 [2024-11-18 12:00:58.765772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:18:01.199 [2024-11-18 12:00:58.765781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.199 [2024-11-18 12:00:58.765858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:01.199 [2024-11-18 12:00:58.766409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:01.199 [2024-11-18 12:00:58.766429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.199 [2024-11-18 12:00:58.766439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.199 [2024-11-18 12:00:58.766448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:18:01.199 [2024-11-18 12:00:58.766457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.199 [2024-11-18 12:00:58.767539] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:01.199 [2024-11-18 12:00:58.777206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.199 [2024-11-18 12:00:58.777240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:01.199 [2024-11-18 12:00:58.777252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.667 ms 00:18:01.199 [2024-11-18 12:00:58.777260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.199 [2024-11-18 12:00:58.777348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.199 [2024-11-18 12:00:58.777362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:01.199 [2024-11-18 12:00:58.777372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:01.199 [2024-11-18 
12:00:58.777383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.199 [2024-11-18 12:00:58.781837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.199 [2024-11-18 12:00:58.781863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.199 [2024-11-18 12:00:58.781874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.408 ms 00:18:01.200 [2024-11-18 12:00:58.781883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.781973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.200 [2024-11-18 12:00:58.781985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.200 [2024-11-18 12:00:58.781995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:01.200 [2024-11-18 12:00:58.782005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.782030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.200 [2024-11-18 12:00:58.782043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:01.200 [2024-11-18 12:00:58.782052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:01.200 [2024-11-18 12:00:58.782062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.782083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:01.200 [2024-11-18 12:00:58.784860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.200 [2024-11-18 12:00:58.784889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.200 [2024-11-18 12:00:58.784900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:18:01.200 [2024-11-18 12:00:58.784909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.784947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.200 [2024-11-18 12:00:58.784957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:01.200 [2024-11-18 12:00:58.784967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:01.200 [2024-11-18 12:00:58.784976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.784996] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:01.200 [2024-11-18 12:00:58.785020] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:01.200 [2024-11-18 12:00:58.785058] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:01.200 [2024-11-18 12:00:58.785076] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:01.200 [2024-11-18 12:00:58.785184] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:01.200 [2024-11-18 12:00:58.785197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:01.200 [2024-11-18 12:00:58.785210] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:18:01.200 [2024-11-18 12:00:58.785223] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785237] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785247] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:01.200 [2024-11-18 12:00:58.785256] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:01.200 [2024-11-18 12:00:58.785266] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:01.200 [2024-11-18 12:00:58.785274] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:01.200 [2024-11-18 12:00:58.785284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.200 [2024-11-18 12:00:58.785293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:01.200 [2024-11-18 12:00:58.785303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:01.200 [2024-11-18 12:00:58.785312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.785405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.200 [2024-11-18 12:00:58.785416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:01.200 [2024-11-18 12:00:58.785427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:01.200 [2024-11-18 12:00:58.785436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.200 [2024-11-18 12:00:58.785541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:01.200 [2024-11-18 12:00:58.785553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:01.200 [2024-11-18 12:00:58.785564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:01.200 [2024-11-18 12:00:58.785603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:01.200 [2024-11-18 12:00:58.785629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.200 [2024-11-18 12:00:58.785647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:01.200 [2024-11-18 12:00:58.785656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:01.200 [2024-11-18 12:00:58.785664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.200 [2024-11-18 12:00:58.785678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:01.200 [2024-11-18 12:00:58.785688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:01.200 [2024-11-18 12:00:58.785696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:01.200 [2024-11-18 12:00:58.785713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:01.200 [2024-11-18 12:00:58.785740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:01.200 [2024-11-18 12:00:58.785766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:01.200 [2024-11-18 12:00:58.785791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:01.200 [2024-11-18 12:00:58.785817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:01.200 [2024-11-18 12:00:58.785842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.200 [2024-11-18 12:00:58.785859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:01.200 [2024-11-18 12:00:58.785867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:01.200 [2024-11-18 12:00:58.785875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.200 [2024-11-18 12:00:58.785884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:01.200 [2024-11-18 12:00:58.785893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:01.200 [2024-11-18 12:00:58.785901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:01.200 [2024-11-18 12:00:58.785919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:01.200 [2024-11-18 12:00:58.785928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785936] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:01.200 [2024-11-18 12:00:58.785946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:01.200 [2024-11-18 12:00:58.785955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.200 [2024-11-18 12:00:58.785966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.200 [2024-11-18 12:00:58.785976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:01.200 [2024-11-18 12:00:58.785985] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:01.200 [2024-11-18 12:00:58.785993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:01.200 [2024-11-18 12:00:58.786002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:01.200 [2024-11-18 12:00:58.786010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:01.200 [2024-11-18 12:00:58.786019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:01.201 [2024-11-18 12:00:58.786029] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:01.201 [2024-11-18 12:00:58.786041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:01.201 [2024-11-18 12:00:58.786061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:01.201 [2024-11-18 12:00:58.786070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:01.201 [2024-11-18 12:00:58.786080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:01.201 [2024-11-18 12:00:58.786090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:01.201 [2024-11-18 12:00:58.786099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:01.201 [2024-11-18 12:00:58.786109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:01.201 [2024-11-18 12:00:58.786118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:01.201 [2024-11-18 12:00:58.786128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:01.201 [2024-11-18 12:00:58.786137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:01.201 [2024-11-18 12:00:58.786184] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:01.201 [2024-11-18 12:00:58.786194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:01.201 [2024-11-18 12:00:58.786214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:01.201 [2024-11-18 12:00:58.786224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:01.201 [2024-11-18 12:00:58.786233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:01.201 [2024-11-18 12:00:58.786243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.786252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:01.201 [2024-11-18 12:00:58.786266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:18:01.201 [2024-11-18 12:00:58.786275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.807180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.807211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.201 [2024-11-18 12:00:58.807222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.837 ms 00:18:01.201 [2024-11-18 12:00:58.807231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.807351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.807372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:01.201 [2024-11-18 12:00:58.807382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:01.201 [2024-11-18 12:00:58.807390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.847079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.847193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.201 [2024-11-18 12:00:58.847211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.657 ms 00:18:01.201 [2024-11-18 12:00:58.847226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.847298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.847311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.201 [2024-11-18 12:00:58.847323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:01.201 [2024-11-18 12:00:58.847332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.847679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.847699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.201 [2024-11-18 12:00:58.847710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:18:01.201 [2024-11-18 12:00:58.847718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.847863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.847880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.201 [2024-11-18 12:00:58.847890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:01.201 [2024-11-18 12:00:58.847899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.858788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.858815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.201 [2024-11-18 12:00:58.858826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.866 ms 00:18:01.201 [2024-11-18 12:00:58.858834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.868601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:01.201 [2024-11-18 12:00:58.868629] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:01.201 [2024-11-18 12:00:58.868642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.868651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:01.201 [2024-11-18 12:00:58.868659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.680 ms 00:18:01.201 [2024-11-18 12:00:58.868667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.887358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.887401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:01.201 [2024-11-18 12:00:58.887416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.632 ms 00:18:01.201 [2024-11-18 12:00:58.887425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.201 [2024-11-18 12:00:58.896555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.201 [2024-11-18 12:00:58.896598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:01.201 [2024-11-18 12:00:58.896611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.058 ms 00:18:01.201 [2024-11-18 12:00:58.896619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.905496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.905523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:01.460 [2024-11-18 12:00:58.905535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.821 ms 00:18:01.460 [2024-11-18 12:00:58.905543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.906101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.906126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:01.460 [2024-11-18 12:00:58.906137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:18:01.460 [2024-11-18 12:00:58.906145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.950219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.950254] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:01.460 [2024-11-18 12:00:58.950268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.046 ms 00:18:01.460 [2024-11-18 12:00:58.950277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.958934] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:01.460 [2024-11-18 12:00:58.970853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.970883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:01.460 [2024-11-18 12:00:58.970896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.492 ms 00:18:01.460 [2024-11-18 12:00:58.970904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.970999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.971012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:01.460 [2024-11-18 12:00:58.971023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:01.460 [2024-11-18 12:00:58.971032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.971086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.971097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:01.460 [2024-11-18 12:00:58.971108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:01.460 [2024-11-18 12:00:58.971117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.971144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.971157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:01.460 [2024-11-18 12:00:58.971167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:01.460 [2024-11-18 12:00:58.971177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.971212] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:01.460 [2024-11-18 12:00:58.971224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.971233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:01.460 [2024-11-18 12:00:58.971242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:01.460 [2024-11-18 12:00:58.971252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.990034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.990065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:01.460 [2024-11-18 12:00:58.990079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.760 ms 00:18:01.460 [2024-11-18 12:00:58.990088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.990182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.460 [2024-11-18 12:00:58.990194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:01.460 [2024-11-18 12:00:58.990205] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:01.460 [2024-11-18 12:00:58.990215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.460 [2024-11-18 12:00:58.990885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.460 [2024-11-18 12:00:58.993319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.097 ms, result 0 00:18:01.460 [2024-11-18 12:00:58.993974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:01.460 [2024-11-18 12:00:59.008787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:02.410  [2024-11-18T12:01:01.494Z] Copying: 16/256 [MB] (16 MBps) [2024-11-18T12:01:02.065Z] Copying: 38/256 [MB] (21 MBps) [2024-11-18T12:01:03.452Z] Copying: 49/256 [MB] (11 MBps) [2024-11-18T12:01:04.396Z] Copying: 67/256 [MB] (18 MBps) [2024-11-18T12:01:05.338Z] Copying: 78/256 [MB] (11 MBps) [2024-11-18T12:01:06.284Z] Copying: 91/256 [MB] (12 MBps) [2024-11-18T12:01:07.229Z] Copying: 106/256 [MB] (15 MBps) [2024-11-18T12:01:08.171Z] Copying: 119/256 [MB] (12 MBps) [2024-11-18T12:01:09.113Z] Copying: 131/256 [MB] (12 MBps) [2024-11-18T12:01:10.055Z] Copying: 153/256 [MB] (21 MBps) [2024-11-18T12:01:11.437Z] Copying: 165/256 [MB] (12 MBps) [2024-11-18T12:01:12.381Z] Copying: 182/256 [MB] (17 MBps) [2024-11-18T12:01:13.326Z] Copying: 194/256 [MB] (12 MBps) [2024-11-18T12:01:14.270Z] Copying: 209/256 [MB] (14 MBps) [2024-11-18T12:01:15.212Z] Copying: 220/256 [MB] (11 MBps) [2024-11-18T12:01:16.156Z] Copying: 233/256 [MB] (13 MBps) [2024-11-18T12:01:17.109Z] Copying: 245/256 [MB] (11 MBps) [2024-11-18T12:01:17.371Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-18 12:01:17.171162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:19.670 [2024-11-18 12:01:17.182716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.182775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:19.670 [2024-11-18 12:01:17.182793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:19.670 [2024-11-18 12:01:17.182811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.182842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:19.670 [2024-11-18 12:01:17.186185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.186400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:19.670 [2024-11-18 12:01:17.186424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:18:19.670 [2024-11-18 12:01:17.186434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.186765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.186781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:19.670 [2024-11-18 12:01:17.186792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:19.670 [2024-11-18 12:01:17.186800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 
12:01:17.190832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.190867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:19.670 [2024-11-18 12:01:17.190878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.014 ms 00:18:19.670 [2024-11-18 12:01:17.190886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.198200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.198243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:19.670 [2024-11-18 12:01:17.198256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.293 ms 00:18:19.670 [2024-11-18 12:01:17.198265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.226226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.226282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:19.670 [2024-11-18 12:01:17.226297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.883 ms 00:18:19.670 [2024-11-18 12:01:17.226306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.242444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.242501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:19.670 [2024-11-18 12:01:17.242514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.062 ms 00:18:19.670 [2024-11-18 12:01:17.242527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.242713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.242728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:19.670 [2024-11-18 12:01:17.242740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:19.670 [2024-11-18 12:01:17.242749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.269344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.269561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:19.670 [2024-11-18 12:01:17.269604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.565 ms 00:18:19.670 [2024-11-18 12:01:17.269613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.295884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.296091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:19.670 [2024-11-18 12:01:17.296113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.206 ms 00:18:19.670 [2024-11-18 12:01:17.296121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.321501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.321551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:19.670 [2024-11-18 12:01:17.321564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.313 ms 00:18:19.670 [2024-11-18 12:01:17.321572] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.347240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.670 [2024-11-18 12:01:17.347444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:19.670 [2024-11-18 12:01:17.347467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:18:19.670 [2024-11-18 12:01:17.347475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.670 [2024-11-18 12:01:17.347640] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:19.670 [2024-11-18 12:01:17.347664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:19.670 [2024-11-18 12:01:17.347790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347838] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.347997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348036] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 
12:01:17.348233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:19.671 [2024-11-18 12:01:17.348416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:18:19.672 [2024-11-18 12:01:17.348460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:19.672 [2024-11-18 12:01:17.348511] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:19.672 [2024-11-18 12:01:17.348521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef29c1cf-2535-40a8-a97f-5e20363ab578 00:18:19.672 [2024-11-18 12:01:17.348529] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:19.672 [2024-11-18 12:01:17.348537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:19.672 [2024-11-18 12:01:17.348545] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:19.672 [2024-11-18 12:01:17.348554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:19.672 [2024-11-18 12:01:17.348564] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:19.672 [2024-11-18 12:01:17.348572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:19.672 [2024-11-18 12:01:17.348596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:19.672 [2024-11-18 12:01:17.348604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:19.672 [2024-11-18 12:01:17.348611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:19.672 [2024-11-18 12:01:17.348620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.672 [2024-11-18 12:01:17.348632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:19.672 [2024-11-18 12:01:17.348642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:18:19.672 [2024-11-18 12:01:17.348651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.672 [2024-11-18 12:01:17.363263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.672 [2024-11-18 12:01:17.363310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:19.672 [2024-11-18 12:01:17.363323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.589 ms 00:18:19.672 [2024-11-18 12:01:17.363332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.672 [2024-11-18 12:01:17.363796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.672 [2024-11-18 12:01:17.363816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:19.672 [2024-11-18 12:01:17.363827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:18:19.672 [2024-11-18 12:01:17.363835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.403116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.403166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:19.933 [2024-11-18 12:01:17.403178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.403187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.403298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.403309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.933 [2024-11-18 12:01:17.403318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.403326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.403386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.403397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.933 [2024-11-18 12:01:17.403406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.403446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.403465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.403477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.933 [2024-11-18 12:01:17.403486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.403494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.487824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.487881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.933 [2024-11-18 12:01:17.487894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.487903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.556826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.557106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.933 [2024-11-18 12:01:17.557127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.557136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.557221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.557231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.933 [2024-11-18 12:01:17.557240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.557249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.557281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.557291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.933 [2024-11-18 12:01:17.557304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.557313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.557420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.557431] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.933 [2024-11-18 12:01:17.557441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.933 [2024-11-18 12:01:17.557449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.933 [2024-11-18 12:01:17.557485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.933 [2024-11-18 12:01:17.557496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:19.933 [2024-11-18 12:01:17.557505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.934 [2024-11-18 12:01:17.557516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.934 [2024-11-18 12:01:17.557562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.934 [2024-11-18 12:01:17.557571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.934 [2024-11-18 12:01:17.557608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.934 [2024-11-18 12:01:17.557618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.934 [2024-11-18 12:01:17.557668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.934 [2024-11-18 12:01:17.557680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.934 [2024-11-18 12:01:17.557692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.934 [2024-11-18 12:01:17.557700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.934 [2024-11-18 12:01:17.557863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.144 ms, result 0 00:18:20.897 00:18:20.897 00:18:20.897 12:01:18 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:21.469 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:21.469 Process with pid 74107 is not found 00:18:21.469 12:01:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74107 00:18:21.469 12:01:18 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74107 ']' 00:18:21.469 12:01:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74107 00:18:21.469 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74107) - No such process 00:18:21.469 12:01:18 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74107 is not found' 00:18:21.469 ************************************ 00:18:21.469 END TEST ftl_trim 00:18:21.469 ************************************ 00:18:21.469 00:18:21.469 real 1m19.041s 00:18:21.469 user 1m35.582s 00:18:21.469 sys 0m14.641s 00:18:21.469 12:01:18 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:21.469 12:01:18 ftl.ftl_trim -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.469 12:01:19 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:21.469 12:01:19 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:21.469 12:01:19 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:21.469 12:01:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:21.469 ************************************ 00:18:21.469 START TEST ftl_restore 00:18:21.469 ************************************ 00:18:21.469 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:21.469 * Looking for test storage... 00:18:21.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.469 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:21.469 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:18:21.469 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:21.731 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.731 12:01:19 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:21.731 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.731 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.731 --rc genhtml_branch_coverage=1 00:18:21.731 --rc genhtml_function_coverage=1 00:18:21.731 --rc genhtml_legend=1 00:18:21.731 --rc geninfo_all_blocks=1 00:18:21.731 --rc geninfo_unexecuted_blocks=1 00:18:21.731 00:18:21.731 ' 00:18:21.731 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.731 --rc genhtml_branch_coverage=1 00:18:21.731 --rc genhtml_function_coverage=1 00:18:21.731 --rc genhtml_legend=1 00:18:21.731 --rc geninfo_all_blocks=1 00:18:21.731 --rc geninfo_unexecuted_blocks=1 00:18:21.731 00:18:21.731 ' 00:18:21.731 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.731 --rc genhtml_branch_coverage=1 00:18:21.731 --rc genhtml_function_coverage=1 00:18:21.731 --rc genhtml_legend=1 00:18:21.731 --rc geninfo_all_blocks=1 00:18:21.731 --rc geninfo_unexecuted_blocks=1 00:18:21.731 00:18:21.731 ' 00:18:21.731 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:21.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.731 --rc genhtml_branch_coverage=1 00:18:21.731 --rc genhtml_function_coverage=1 00:18:21.731 --rc genhtml_legend=1 00:18:21.731 --rc geninfo_all_blocks=1 00:18:21.731 --rc geninfo_unexecuted_blocks=1 00:18:21.731 00:18:21.731 ' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
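The xtrace block above is scripts/common.sh deciding which lcov flags to export: lt 1.15 2 splits both version strings on the characters ".-:" and compares them field by field, and since 1 < 2 in the first field the legacy --rc lcov_branch_coverage options are selected. A condensed sketch of that comparison, assuming the cmp_versions VER1 OP VER2 calling convention visible in the trace:

    cmp_versions() {
        local IFS=.-:                 # split version strings on '.', '-', ':'
        local -a ver1=($1) ver2=($3)  # $2 is the operator: '<', '>', '<=', ...
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # first differing field decides; missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]             # all fields equal: true only for '==', '<=', '>='
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, as in this run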
00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:21.731 12:01:19 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.0fBeh3ELaT 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:21.732 
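Before that trap is installed, restore.sh has already parsed its arguments: the optstring :u:c:f in the getopts trace turns -c 0000:00:10.0 into the NV-cache BDF, and the remaining positional argument becomes the base device. A minimal sketch of that pattern (only the c branch fires in this run; the u and f branches are assumptions inferred from the optstring):

    while getopts :u:c:f opt; do
        case $opt in
            u) uuid=$OPTARG ;;      # -u <uuid>: reuse an existing FTL instance (assumed)
            c) nv_cache=$OPTARG ;;  # -c <bdf>: NV-cache device, 0000:00:10.0 here
            f) fast=1 ;;            # -f: fast-startup variant (assumed)
        esac
    done
    shift $((OPTIND - 1))           # xtrace renders this as 'shift 2' above
    device=$1                       # base device, 0000:00:11.0
    timeout=240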
12:01:19 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74435 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74435 00:18:21.732 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74435 ']' 00:18:21.732 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.732 12:01:19 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.732 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.732 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.732 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.732 12:01:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:21.732 [2024-11-18 12:01:19.338856] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:21.732 [2024-11-18 12:01:19.339226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74435 ] 00:18:21.993 [2024-11-18 12:01:19.500851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.993 [2024-11-18 12:01:19.619226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:22.937 12:01:20 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:22.937 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:23.198 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:23.198 { 00:18:23.198 "name": "nvme0n1", 00:18:23.198 "aliases": [ 00:18:23.198 "ed1e259f-6217-4253-8413-632dbc3ac313" 00:18:23.198 ], 00:18:23.198 "product_name": "NVMe disk", 00:18:23.198 "block_size": 4096, 00:18:23.198 "num_blocks": 1310720, 00:18:23.198 "uuid": 
"ed1e259f-6217-4253-8413-632dbc3ac313", 00:18:23.198 "numa_id": -1, 00:18:23.198 "assigned_rate_limits": { 00:18:23.198 "rw_ios_per_sec": 0, 00:18:23.198 "rw_mbytes_per_sec": 0, 00:18:23.198 "r_mbytes_per_sec": 0, 00:18:23.198 "w_mbytes_per_sec": 0 00:18:23.198 }, 00:18:23.198 "claimed": true, 00:18:23.198 "claim_type": "read_many_write_one", 00:18:23.198 "zoned": false, 00:18:23.198 "supported_io_types": { 00:18:23.198 "read": true, 00:18:23.198 "write": true, 00:18:23.198 "unmap": true, 00:18:23.198 "flush": true, 00:18:23.198 "reset": true, 00:18:23.198 "nvme_admin": true, 00:18:23.198 "nvme_io": true, 00:18:23.198 "nvme_io_md": false, 00:18:23.198 "write_zeroes": true, 00:18:23.198 "zcopy": false, 00:18:23.198 "get_zone_info": false, 00:18:23.198 "zone_management": false, 00:18:23.198 "zone_append": false, 00:18:23.198 "compare": true, 00:18:23.198 "compare_and_write": false, 00:18:23.198 "abort": true, 00:18:23.198 "seek_hole": false, 00:18:23.198 "seek_data": false, 00:18:23.198 "copy": true, 00:18:23.198 "nvme_iov_md": false 00:18:23.198 }, 00:18:23.198 "driver_specific": { 00:18:23.198 "nvme": [ 00:18:23.198 { 00:18:23.198 "pci_address": "0000:00:11.0", 00:18:23.198 "trid": { 00:18:23.198 "trtype": "PCIe", 00:18:23.198 "traddr": "0000:00:11.0" 00:18:23.198 }, 00:18:23.198 "ctrlr_data": { 00:18:23.198 "cntlid": 0, 00:18:23.198 "vendor_id": "0x1b36", 00:18:23.198 "model_number": "QEMU NVMe Ctrl", 00:18:23.198 "serial_number": "12341", 00:18:23.198 "firmware_revision": "8.0.0", 00:18:23.198 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:23.198 "oacs": { 00:18:23.199 "security": 0, 00:18:23.199 "format": 1, 00:18:23.199 "firmware": 0, 00:18:23.199 "ns_manage": 1 00:18:23.199 }, 00:18:23.199 "multi_ctrlr": false, 00:18:23.199 "ana_reporting": false 00:18:23.199 }, 00:18:23.199 "vs": { 00:18:23.199 "nvme_version": "1.4" 00:18:23.199 }, 00:18:23.199 "ns_data": { 00:18:23.199 "id": 1, 00:18:23.199 "can_share": false 00:18:23.199 } 00:18:23.199 } 00:18:23.199 ], 00:18:23.199 "mp_policy": "active_passive" 00:18:23.199 } 00:18:23.199 } 00:18:23.199 ]' 00:18:23.199 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:23.199 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:23.199 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:23.459 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:23.459 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:23.459 12:01:20 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:18:23.459 12:01:20 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:23.459 12:01:20 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:23.459 12:01:20 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:23.459 12:01:20 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:23.459 12:01:20 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:23.459 12:01:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=fcd531bb-7b0c-4f88-ac97-066ab9e6ba94 00:18:23.459 12:01:21 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:23.459 12:01:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fcd531bb-7b0c-4f88-ac97-066ab9e6ba94 00:18:23.719 12:01:21 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:23.977 12:01:21 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3ada74ae-4f7d-400a-be50-119bafb80292 00:18:23.977 12:01:21 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3ada74ae-4f7d-400a-be50-119bafb80292 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:24.236 12:01:21 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.236 12:01:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.236 12:01:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:24.236 12:01:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:24.236 12:01:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:24.236 12:01:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.495 12:01:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:24.495 { 00:18:24.495 "name": "7587e8c8-1903-4a2b-b347-a8cffe442331", 00:18:24.495 "aliases": [ 00:18:24.495 "lvs/nvme0n1p0" 00:18:24.495 ], 00:18:24.495 "product_name": "Logical Volume", 00:18:24.495 "block_size": 4096, 00:18:24.495 "num_blocks": 26476544, 00:18:24.495 "uuid": "7587e8c8-1903-4a2b-b347-a8cffe442331", 00:18:24.495 "assigned_rate_limits": { 00:18:24.495 "rw_ios_per_sec": 0, 00:18:24.495 "rw_mbytes_per_sec": 0, 00:18:24.495 "r_mbytes_per_sec": 0, 00:18:24.495 "w_mbytes_per_sec": 0 00:18:24.495 }, 00:18:24.495 "claimed": false, 00:18:24.495 "zoned": false, 00:18:24.495 "supported_io_types": { 00:18:24.495 "read": true, 00:18:24.495 "write": true, 00:18:24.495 "unmap": true, 00:18:24.495 "flush": false, 00:18:24.495 "reset": true, 00:18:24.495 "nvme_admin": false, 00:18:24.495 "nvme_io": false, 00:18:24.495 "nvme_io_md": false, 00:18:24.495 "write_zeroes": true, 00:18:24.495 "zcopy": false, 00:18:24.495 "get_zone_info": false, 00:18:24.495 "zone_management": false, 00:18:24.495 "zone_append": false, 00:18:24.495 "compare": false, 00:18:24.495 "compare_and_write": false, 00:18:24.495 "abort": false, 00:18:24.495 "seek_hole": true, 00:18:24.495 "seek_data": true, 00:18:24.495 "copy": false, 00:18:24.495 "nvme_iov_md": false 00:18:24.495 }, 00:18:24.495 "driver_specific": { 00:18:24.495 "lvol": { 00:18:24.495 "lvol_store_uuid": "3ada74ae-4f7d-400a-be50-119bafb80292", 00:18:24.495 "base_bdev": "nvme0n1", 00:18:24.495 "thin_provision": true, 00:18:24.495 "num_allocated_clusters": 0, 00:18:24.495 "snapshot": false, 00:18:24.495 "clone": false, 00:18:24.495 "esnap_clone": false 00:18:24.495 } 00:18:24.495 } 00:18:24.495 } 00:18:24.495 ]' 00:18:24.495 12:01:21 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:24.495 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:24.495 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:24.495 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:24.495 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:24.495 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:24.495 12:01:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:24.495 12:01:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:24.495 12:01:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:24.836 12:01:22 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:24.836 12:01:22 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:24.836 12:01:22 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.836 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:24.836 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:24.836 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:24.836 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:24.836 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:25.095 { 00:18:25.095 "name": "7587e8c8-1903-4a2b-b347-a8cffe442331", 00:18:25.095 "aliases": [ 00:18:25.095 "lvs/nvme0n1p0" 00:18:25.095 ], 00:18:25.095 "product_name": "Logical Volume", 00:18:25.095 "block_size": 4096, 00:18:25.095 "num_blocks": 26476544, 00:18:25.095 "uuid": "7587e8c8-1903-4a2b-b347-a8cffe442331", 00:18:25.095 "assigned_rate_limits": { 00:18:25.095 "rw_ios_per_sec": 0, 00:18:25.095 "rw_mbytes_per_sec": 0, 00:18:25.095 "r_mbytes_per_sec": 0, 00:18:25.095 "w_mbytes_per_sec": 0 00:18:25.095 }, 00:18:25.095 "claimed": false, 00:18:25.095 "zoned": false, 00:18:25.095 "supported_io_types": { 00:18:25.095 "read": true, 00:18:25.095 "write": true, 00:18:25.095 "unmap": true, 00:18:25.095 "flush": false, 00:18:25.095 "reset": true, 00:18:25.095 "nvme_admin": false, 00:18:25.095 "nvme_io": false, 00:18:25.095 "nvme_io_md": false, 00:18:25.095 "write_zeroes": true, 00:18:25.095 "zcopy": false, 00:18:25.095 "get_zone_info": false, 00:18:25.095 "zone_management": false, 00:18:25.095 "zone_append": false, 00:18:25.095 "compare": false, 00:18:25.095 "compare_and_write": false, 00:18:25.095 "abort": false, 00:18:25.095 "seek_hole": true, 00:18:25.095 "seek_data": true, 00:18:25.095 "copy": false, 00:18:25.095 "nvme_iov_md": false 00:18:25.095 }, 00:18:25.095 "driver_specific": { 00:18:25.095 "lvol": { 00:18:25.095 "lvol_store_uuid": "3ada74ae-4f7d-400a-be50-119bafb80292", 00:18:25.095 "base_bdev": "nvme0n1", 00:18:25.095 "thin_provision": true, 00:18:25.095 "num_allocated_clusters": 0, 00:18:25.095 "snapshot": false, 00:18:25.095 "clone": false, 00:18:25.095 "esnap_clone": false 00:18:25.095 } 00:18:25.095 } 00:18:25.095 } 00:18:25.095 ]' 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
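Each of these size probes is the same recipe: dump the bdev as JSON, pull block_size and num_blocks with jq, and convert to MiB. A minimal sketch of the helper the trace suggests ($rpc_py comes from the sourced ftl/common.sh):

    get_bdev_size() { # prints the bdev size in MiB
        local bdev_info bs nb
        bdev_info=$($rpc_py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for every bdev in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))
    }

The traced values check out: 4096 B x 1310720 blocks = 5120 MiB for nvme0n1, and 4096 B x 26476544 blocks = 103424 MiB for the thin-provisioned lvol. The base_size=5171 that follows is consistent with a 5 % NV-cache sizing rule, 103424 * 5 / 100 = 5171 in integer arithmetic, though the exact formula in common.sh is not shown in this log.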
00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:25.095 12:01:22 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:25.095 12:01:22 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:25.095 12:01:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:25.095 12:01:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:25.095 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7587e8c8-1903-4a2b-b347-a8cffe442331 00:18:25.353 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:25.353 { 00:18:25.353 "name": "7587e8c8-1903-4a2b-b347-a8cffe442331", 00:18:25.353 "aliases": [ 00:18:25.353 "lvs/nvme0n1p0" 00:18:25.353 ], 00:18:25.353 "product_name": "Logical Volume", 00:18:25.353 "block_size": 4096, 00:18:25.353 "num_blocks": 26476544, 00:18:25.353 "uuid": "7587e8c8-1903-4a2b-b347-a8cffe442331", 00:18:25.353 "assigned_rate_limits": { 00:18:25.353 "rw_ios_per_sec": 0, 00:18:25.353 "rw_mbytes_per_sec": 0, 00:18:25.353 "r_mbytes_per_sec": 0, 00:18:25.353 "w_mbytes_per_sec": 0 00:18:25.353 }, 00:18:25.353 "claimed": false, 00:18:25.353 "zoned": false, 00:18:25.353 "supported_io_types": { 00:18:25.353 "read": true, 00:18:25.353 "write": true, 00:18:25.353 "unmap": true, 00:18:25.353 "flush": false, 00:18:25.353 "reset": true, 00:18:25.353 "nvme_admin": false, 00:18:25.353 "nvme_io": false, 00:18:25.353 "nvme_io_md": false, 00:18:25.353 "write_zeroes": true, 00:18:25.353 "zcopy": false, 00:18:25.353 "get_zone_info": false, 00:18:25.353 "zone_management": false, 00:18:25.353 "zone_append": false, 00:18:25.353 "compare": false, 00:18:25.353 "compare_and_write": false, 00:18:25.353 "abort": false, 00:18:25.353 "seek_hole": true, 00:18:25.353 "seek_data": true, 00:18:25.353 "copy": false, 00:18:25.353 "nvme_iov_md": false 00:18:25.353 }, 00:18:25.353 "driver_specific": { 00:18:25.353 "lvol": { 00:18:25.353 "lvol_store_uuid": "3ada74ae-4f7d-400a-be50-119bafb80292", 00:18:25.353 "base_bdev": "nvme0n1", 00:18:25.353 "thin_provision": true, 00:18:25.353 "num_allocated_clusters": 0, 00:18:25.354 "snapshot": false, 00:18:25.354 "clone": false, 00:18:25.354 "esnap_clone": false 00:18:25.354 } 00:18:25.354 } 00:18:25.354 } 00:18:25.354 ]' 00:18:25.354 12:01:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:25.354 12:01:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:25.354 12:01:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:25.354 12:01:23 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:18:25.354 12:01:23 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:25.354 12:01:23 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7587e8c8-1903-4a2b-b347-a8cffe442331 --l2p_dram_limit 10' 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:25.354 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:25.354 12:01:23 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7587e8c8-1903-4a2b-b347-a8cffe442331 --l2p_dram_limit 10 -c nvc0n1p0 00:18:25.613 [2024-11-18 12:01:23.234486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.234524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.613 [2024-11-18 12:01:23.234537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:25.613 [2024-11-18 12:01:23.234544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.234594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.234603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.613 [2024-11-18 12:01:23.234611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:25.613 [2024-11-18 12:01:23.234616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.234636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.613 [2024-11-18 12:01:23.235165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.613 [2024-11-18 12:01:23.235186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.235192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.613 [2024-11-18 12:01:23.235200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:18:25.613 [2024-11-18 12:01:23.235206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.235295] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 460b762a-5884-46ae-bbdc-38ff9e82ccce 00:18:25.613 [2024-11-18 12:01:23.236243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.236261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:25.613 [2024-11-18 12:01:23.236269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:25.613 [2024-11-18 12:01:23.236276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.241064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 
12:01:23.241160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.613 [2024-11-18 12:01:23.241216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:18:25.613 [2024-11-18 12:01:23.241237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.241312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.241333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.613 [2024-11-18 12:01:23.241349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:25.613 [2024-11-18 12:01:23.241367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.241414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.241434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.613 [2024-11-18 12:01:23.241501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:25.613 [2024-11-18 12:01:23.241524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.241550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.613 [2024-11-18 12:01:23.244428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.244517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.613 [2024-11-18 12:01:23.244562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:18:25.613 [2024-11-18 12:01:23.244579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.244625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.244643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.613 [2024-11-18 12:01:23.244659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:25.613 [2024-11-18 12:01:23.244675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.244698] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:25.613 [2024-11-18 12:01:23.244811] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:25.613 [2024-11-18 12:01:23.244883] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:25.613 [2024-11-18 12:01:23.244911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:25.613 [2024-11-18 12:01:23.244937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.613 [2024-11-18 12:01:23.244961] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.613 [2024-11-18 12:01:23.245024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:25.613 [2024-11-18 12:01:23.245040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.613 [2024-11-18 12:01:23.245059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:25.613 [2024-11-18 12:01:23.245072] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:25.613 [2024-11-18 12:01:23.245089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.613 [2024-11-18 12:01:23.245104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.613 [2024-11-18 12:01:23.245146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:18:25.613 [2024-11-18 12:01:23.245169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.613 [2024-11-18 12:01:23.245247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.614 [2024-11-18 12:01:23.245263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.614 [2024-11-18 12:01:23.245279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:25.614 [2024-11-18 12:01:23.245332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.614 [2024-11-18 12:01:23.245435] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.614 [2024-11-18 12:01:23.245456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.614 [2024-11-18 12:01:23.245474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.614 [2024-11-18 12:01:23.245489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.245536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.614 [2024-11-18 12:01:23.245555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.245572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:25.614 [2024-11-18 12:01:23.245721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:25.614 [2024-11-18 12:01:23.245740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:25.614 [2024-11-18 12:01:23.245756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.614 [2024-11-18 12:01:23.245771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.614 [2024-11-18 12:01:23.245785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:25.614 [2024-11-18 12:01:23.245801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.614 [2024-11-18 12:01:23.245815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.614 [2024-11-18 12:01:23.245832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:25.614 [2024-11-18 12:01:23.245846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.245887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.614 [2024-11-18 12:01:23.245905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.614 [2024-11-18 12:01:23.246276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.614 
[2024-11-18 12:01:23.246339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.614 [2024-11-18 12:01:23.246361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:25.614 [2024-11-18 12:01:23.246378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.614 [2024-11-18 12:01:23.246398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.614 [2024-11-18 12:01:23.246410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.614 [2024-11-18 12:01:23.246415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:25.614 [2024-11-18 12:01:23.246422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.614 [2024-11-18 12:01:23.246427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:25.614 [2024-11-18 12:01:23.246433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:25.614 [2024-11-18 12:01:23.246438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:25.614 [2024-11-18 12:01:23.246451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:25.614 [2024-11-18 12:01:23.246457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246462] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.614 [2024-11-18 12:01:23.246469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.614 [2024-11-18 12:01:23.246475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.614 [2024-11-18 12:01:23.246487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.614 [2024-11-18 12:01:23.246496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.614 [2024-11-18 12:01:23.246501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:25.614 [2024-11-18 12:01:23.246509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.614 [2024-11-18 12:01:23.246513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.614 [2024-11-18 12:01:23.246520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.614 [2024-11-18 12:01:23.246528] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.614 [2024-11-18 
12:01:23.246537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:25.614 [2024-11-18 12:01:23.246552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:25.614 [2024-11-18 12:01:23.246558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:25.614 [2024-11-18 12:01:23.246565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:25.614 [2024-11-18 12:01:23.246571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:25.614 [2024-11-18 12:01:23.246578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:25.614 [2024-11-18 12:01:23.246598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:25.614 [2024-11-18 12:01:23.246606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:25.614 [2024-11-18 12:01:23.246611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:25.614 [2024-11-18 12:01:23.246620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:25.614 [2024-11-18 12:01:23.246649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.614 [2024-11-18 12:01:23.246659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.614 [2024-11-18 12:01:23.246673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.614 [2024-11-18 12:01:23.246678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.614 [2024-11-18 12:01:23.246685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.614 [2024-11-18 12:01:23.246691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.614 [2024-11-18 12:01:23.246699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.614 [2024-11-18 12:01:23.246705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:18:25.614 [2024-11-18 12:01:23.246711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.614 [2024-11-18 12:01:23.246743] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:25.614 [2024-11-18 12:01:23.246753] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:29.824 [2024-11-18 12:01:26.971071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:26.971161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:29.824 [2024-11-18 12:01:26.971181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3724.308 ms 00:18:29.824 [2024-11-18 12:01:26.971193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.002448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.002515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.824 [2024-11-18 12:01:27.002530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.032 ms 00:18:29.824 [2024-11-18 12:01:27.002541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.002738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.002755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.824 [2024-11-18 12:01:27.002765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:29.824 [2024-11-18 12:01:27.002782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.038020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.038255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.824 [2024-11-18 12:01:27.038277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.203 ms 00:18:29.824 [2024-11-18 12:01:27.038288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.038325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.038341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.824 [2024-11-18 12:01:27.038350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:29.824 [2024-11-18 12:01:27.038361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.038984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.039010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.824 [2024-11-18 12:01:27.039020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:18:29.824 [2024-11-18 12:01:27.039030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 
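[editor's note] The superblock v5 dump above encodes each metadata region as type/version/block offset/size in hex, and consecutive regions should tile the device with no gaps, i.e. each blk_offs equals the previous blk_offs + blk_sz. A small sketch that checks this for the nvc table values copied from the log:

    # Verify the SB metadata regions are contiguous: offs[i+1] == offs[i] + sz[i].
    offs=(0x0 0x20 0x5020 0x50a0 0x5120 0x5920 0x6120 0x6920 0x7120 0x7160 0x71a0 0x71c0 0x71e0 0x7200 0x7220)
    sz=(0x20 0x5000 0x80 0x80 0x800 0x800 0x800 0x800 0x40 0x40 0x20 0x20 0x20 0x20)
    for i in "${!sz[@]}"; do
        [ $(( offs[i] + sz[i] )) -eq $(( offs[i+1] )) ] || echo "gap after region $i"
    done
    # Prints nothing for the values above: every region starts where the previous one ends.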
[2024-11-18 12:01:27.039145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.039156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.824 [2024-11-18 12:01:27.039168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:29.824 [2024-11-18 12:01:27.039180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.056376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.056425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.824 [2024-11-18 12:01:27.056436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.176 ms 00:18:29.824 [2024-11-18 12:01:27.056446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.069522] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:29.824 [2024-11-18 12:01:27.073296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.073337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.824 [2024-11-18 12:01:27.073351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.761 ms 00:18:29.824 [2024-11-18 12:01:27.073360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.168447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.168515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:29.824 [2024-11-18 12:01:27.168536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.049 ms 00:18:29.824 [2024-11-18 12:01:27.168546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.168778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.168797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.824 [2024-11-18 12:01:27.168812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:18:29.824 [2024-11-18 12:01:27.168821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.194913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.194963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:29.824 [2024-11-18 12:01:27.194979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 00:18:29.824 [2024-11-18 12:01:27.194987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.220018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.220064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:29.824 [2024-11-18 12:01:27.220079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.974 ms 00:18:29.824 [2024-11-18 12:01:27.220087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.220733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.220755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.824 
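[editor's note] Every management step in this log is traced as a fixed Action / name / duration / status quadruple from mngt/ftl_mngt.c, so slow steps (the ~3.7 s NV cache scrub above dominates this startup) can be pulled out mechanically. A rough awk sketch, assuming the console output is saved as build.log (a hypothetical file name):

    # List each traced FTL management step with its duration, slowest last.
    # Captures only the first two words of each step name; trace_step prints
    # "name: <step>" and "duration: <ms> ms" as separate records.
    awk '{
        for (i = 1; i <= NF; i++) {
            if ($i == "name:")     { name = $(i+1) " " $(i+2) }
            if ($i == "duration:") { print $(i+1) " ms\t" name }
        }
    }' build.log | sort -n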
[2024-11-18 12:01:27.220767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:18:29.824 [2024-11-18 12:01:27.220778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.301449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.301506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:29.824 [2024-11-18 12:01:27.301529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.608 ms 00:18:29.824 [2024-11-18 12:01:27.301540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.329483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.329534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:29.824 [2024-11-18 12:01:27.329550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.824 ms 00:18:29.824 [2024-11-18 12:01:27.329560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.824 [2024-11-18 12:01:27.355640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.824 [2024-11-18 12:01:27.355701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:29.824 [2024-11-18 12:01:27.355716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.011 ms 00:18:29.825 [2024-11-18 12:01:27.355725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.825 [2024-11-18 12:01:27.382033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.825 [2024-11-18 12:01:27.382081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:29.825 [2024-11-18 12:01:27.382096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.250 ms 00:18:29.825 [2024-11-18 12:01:27.382104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.825 [2024-11-18 12:01:27.382159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.825 [2024-11-18 12:01:27.382168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:29.825 [2024-11-18 12:01:27.382182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:29.825 [2024-11-18 12:01:27.382190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.825 [2024-11-18 12:01:27.382285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.825 [2024-11-18 12:01:27.382297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:29.825 [2024-11-18 12:01:27.382311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:29.825 [2024-11-18 12:01:27.382320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.825 [2024-11-18 12:01:27.383755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4148.720 ms, result 0 00:18:29.825 { 00:18:29.825 "name": "ftl0", 00:18:29.825 "uuid": "460b762a-5884-46ae-bbdc-38ff9e82ccce" 00:18:29.825 } 00:18:29.825 12:01:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:29.825 12:01:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:30.086 12:01:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:30.086 12:01:27 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:30.349 [2024-11-18 12:01:27.834854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.834916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:30.349 [2024-11-18 12:01:27.834931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.349 [2024-11-18 12:01:27.834948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.834975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:30.349 [2024-11-18 12:01:27.838027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.838070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:30.349 [2024-11-18 12:01:27.838084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.029 ms 00:18:30.349 [2024-11-18 12:01:27.838093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.838368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.838383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:30.349 [2024-11-18 12:01:27.838395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:30.349 [2024-11-18 12:01:27.838404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.841671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.841696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:30.349 [2024-11-18 12:01:27.841707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:18:30.349 [2024-11-18 12:01:27.841715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.847938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.847978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:30.349 [2024-11-18 12:01:27.847994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.200 ms 00:18:30.349 [2024-11-18 12:01:27.848002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.874247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.874307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.349 [2024-11-18 12:01:27.874322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.163 ms 00:18:30.349 [2024-11-18 12:01:27.874329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.891440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.891488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.349 [2024-11-18 12:01:27.891504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.052 ms 00:18:30.349 [2024-11-18 12:01:27.891512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.891707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.891722] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.349 [2024-11-18 12:01:27.891734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:18:30.349 [2024-11-18 12:01:27.891743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.917730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.917777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:30.349 [2024-11-18 12:01:27.917792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.958 ms 00:18:30.349 [2024-11-18 12:01:27.917800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.943225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.943269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:30.349 [2024-11-18 12:01:27.943283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.371 ms 00:18:30.349 [2024-11-18 12:01:27.943290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.968075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.968120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:30.349 [2024-11-18 12:01:27.968133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.729 ms 00:18:30.349 [2024-11-18 12:01:27.968141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.992669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.349 [2024-11-18 12:01:27.992716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:30.349 [2024-11-18 12:01:27.992729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.417 ms 00:18:30.349 [2024-11-18 12:01:27.992737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.349 [2024-11-18 12:01:27.992784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:30.349 [2024-11-18 12:01:27.992801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:30.349 [2024-11-18 12:01:27.992891] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10 ... Band 100: 0 / 261120 wr_cnt: 0 state: free [91 identical per-band entries elided] 00:18:30.351 [2024-11-18 12:01:27.993744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:30.351 [2024-11-18 12:01:27.993757] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 460b762a-5884-46ae-bbdc-38ff9e82ccce 00:18:30.351 [2024-11-18 12:01:27.993765] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:30.351 [2024-11-18 12:01:27.993780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:30.351 [2024-11-18 12:01:27.993787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:30.351 [2024-11-18 12:01:27.993797] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:30.351 [2024-11-18 12:01:27.993804] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:30.351 [2024-11-18 12:01:27.993815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:30.351 [2024-11-18 12:01:27.993822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:30.351 [2024-11-18 12:01:27.993830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:30.351 [2024-11-18 12:01:27.993836] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:18:30.351 [2024-11-18 12:01:27.993846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.351 [2024-11-18 12:01:27.993854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:30.351 [2024-11-18 12:01:27.993866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:18:30.351 [2024-11-18 12:01:27.993874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.351 [2024-11-18 12:01:28.007289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.351 [2024-11-18 12:01:28.007331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:30.351 [2024-11-18 12:01:28.007345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.353 ms 00:18:30.351 [2024-11-18 12:01:28.007353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.351 [2024-11-18 12:01:28.007794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.351 [2024-11-18 12:01:28.007814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:30.351 [2024-11-18 12:01:28.007829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:18:30.351 [2024-11-18 12:01:28.007836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.053753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.053801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.613 [2024-11-18 12:01:28.053816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.053826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.053897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.053907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.613 [2024-11-18 12:01:28.053921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.053929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.054009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.054021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.613 [2024-11-18 12:01:28.054032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.054040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.054064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.054071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.613 [2024-11-18 12:01:28.054082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.054090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.137968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.138020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.613 [2024-11-18 12:01:28.138036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
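[editor's note] The "WAF: inf" in the statistics dump above follows directly from the counters next to it: write amplification is the ratio of total media writes to user writes, and this run performed 960 writes (presumably all metadata/housekeeping, since no user data was written yet) against 0 user writes. A worked check with the logged values; real counter math is floating-point, integer division here is just illustrative:

    # WAF = total writes / user writes; with 0 user writes the ratio is infinite.
    total=960; user=0
    if [ "$user" -eq 0 ]; then echo "WAF: inf"; else echo "WAF: $(( total / user ))"; fi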
00:18:30.613 [2024-11-18 12:01:28.138045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.206534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.206598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.613 [2024-11-18 12:01:28.206614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.206626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.206731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.206743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.613 [2024-11-18 12:01:28.206756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.206766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.206818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.206829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.613 [2024-11-18 12:01:28.206840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.206848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.206949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.206959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.613 [2024-11-18 12:01:28.206970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.206978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.207019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.207029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:30.613 [2024-11-18 12:01:28.207040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.207048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.207097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.207106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.613 [2024-11-18 12:01:28.207116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.207124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.207174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.613 [2024-11-18 12:01:28.207185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.613 [2024-11-18 12:01:28.207196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.613 [2024-11-18 12:01:28.207203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.613 [2024-11-18 12:01:28.207350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.454 ms, result 0 00:18:30.613 true 00:18:30.613 12:01:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74435 
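[editor's note] The xtrace lines that follow (from common/autotest_common.sh, lines @952-@976) show killprocess validating its pid argument, checking the process exists, resolving the process name via ps, refusing to kill sudo, then issuing kill and wait. A condensed sketch of that flow as visible in the trace; this is a reconstruction, not the verbatim helper:

    # killprocess <pid>: guard, identify, signal, reap -- as traced below.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                     # '[' -z 74435 ']' guard
        kill -0 "$pid" || return 1                    # process must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1        # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }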
00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74435 ']' 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74435 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74435 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:30.613 killing process with pid 74435 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74435' 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74435 00:18:30.613 12:01:28 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74435 00:18:37.203 12:01:34 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:40.500 262144+0 records in 00:18:40.500 262144+0 records out 00:18:40.500 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.90528 s, 275 MB/s 00:18:40.500 12:01:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:43.049 12:01:40 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.049 [2024-11-18 12:01:40.257909] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:18:43.049 [2024-11-18 12:01:40.258038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74671 ] 00:18:43.049 [2024-11-18 12:01:40.418918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.049 [2024-11-18 12:01:40.535508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.311 [2024-11-18 12:01:40.821858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.311 [2024-11-18 12:01:40.821939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.311 [2024-11-18 12:01:40.982806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:40.982870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.311 [2024-11-18 12:01:40.982891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.311 [2024-11-18 12:01:40.982900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:40.982954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:40.982964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.311 [2024-11-18 12:01:40.982976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:43.311 [2024-11-18 12:01:40.982984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:40.983004] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
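[editor's note] The dd numbers above are internally consistent: 256K blocks of 4 KiB are exactly 1 GiB, and dividing that by the elapsed 3.90528 s reproduces the reported ~275 MB/s. A quick check:

    # 262144 records x 4096 bytes = 1073741824 bytes (1.0 GiB);
    # 1073741824 / 3.90528 s ~= 274.9 MB/s, matching dd's rounded "275 MB/s".
    echo $(( 262144 * 4096 )) bytes
    awk 'BEGIN { printf "%.1f MB/s\n", 262144 * 4096 / 3.90528 / 1e6 }'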
Using nvc0n1p0 as write buffer cache 00:18:43.311 [2024-11-18 12:01:40.983782] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.311 [2024-11-18 12:01:40.983811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:40.983820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.311 [2024-11-18 12:01:40.983830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:18:43.311 [2024-11-18 12:01:40.983838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:40.985434] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.311 [2024-11-18 12:01:40.999686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:40.999739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.311 [2024-11-18 12:01:40.999754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.253 ms 00:18:43.311 [2024-11-18 12:01:40.999762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:40.999839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:40.999849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.311 [2024-11-18 12:01:40.999859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:43.311 [2024-11-18 12:01:40.999866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:41.007907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:41.007952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.311 [2024-11-18 12:01:41.007963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.965 ms 00:18:43.311 [2024-11-18 12:01:41.007972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:41.008053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:41.008062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.311 [2024-11-18 12:01:41.008071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:43.311 [2024-11-18 12:01:41.008079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:41.008123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.311 [2024-11-18 12:01:41.008134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.311 [2024-11-18 12:01:41.008143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:43.311 [2024-11-18 12:01:41.008150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.311 [2024-11-18 12:01:41.008175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.574 [2024-11-18 12:01:41.012097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-18 12:01:41.012140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.574 [2024-11-18 12:01:41.012151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.928 ms 00:18:43.574 [2024-11-18 12:01:41.012162] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-18 12:01:41.012196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-18 12:01:41.012205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.574 [2024-11-18 12:01:41.012215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:43.574 [2024-11-18 12:01:41.012223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-18 12:01:41.012273] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.574 [2024-11-18 12:01:41.012297] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.574 [2024-11-18 12:01:41.012335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.574 [2024-11-18 12:01:41.012355] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:43.574 [2024-11-18 12:01:41.012464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.574 [2024-11-18 12:01:41.012476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.574 [2024-11-18 12:01:41.012487] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.574 [2024-11-18 12:01:41.012498] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.574 [2024-11-18 12:01:41.012509] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.574 [2024-11-18 12:01:41.012517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:43.574 [2024-11-18 12:01:41.012525] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.574 [2024-11-18 12:01:41.012533] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.574 [2024-11-18 12:01:41.012543] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.574 [2024-11-18 12:01:41.012554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-18 12:01:41.012561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.574 [2024-11-18 12:01:41.012570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:43.574 [2024-11-18 12:01:41.012577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-18 12:01:41.012677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-18 12:01:41.012688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.574 [2024-11-18 12:01:41.012696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:43.574 [2024-11-18 12:01:41.012703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-18 12:01:41.012809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.574 [2024-11-18 12:01:41.012839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.574 [2024-11-18 12:01:41.012849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:43.574 [2024-11-18 12:01:41.012857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB [remaining NV cache, base device, and SB metadata layout dump is identical to the first FTL startup dump earlier in this log; elided] 00:18:43.575 [2024-11-18 12:01:41.013409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.575 [2024-11-18 12:01:41.013417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.575 [2024-11-18 12:01:41.013424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:18:43.575 [2024-11-18 12:01:41.013432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.575 [2024-11-18 12:01:41.044959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.575 [2024-11-18 12:01:41.045012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.575 [2024-11-18 12:01:41.045024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.483 ms 00:18:43.575 [2024-11-18 12:01:41.045032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.575 [2024-11-18 12:01:41.045126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.575 [2024-11-18 12:01:41.045135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.575 [2024-11-18 12:01:41.045144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.065 ms 00:18:43.575 [2024-11-18 12:01:41.045152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.575 [2024-11-18 12:01:41.092981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.575 [2024-11-18 12:01:41.093039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.575 [2024-11-18 12:01:41.093053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.770 ms 00:18:43.575 [2024-11-18 12:01:41.093062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.575 [2024-11-18 12:01:41.093110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.575 [2024-11-18 12:01:41.093120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.575 [2024-11-18 12:01:41.093129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.575 [2024-11-18 12:01:41.093141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.575 [2024-11-18 12:01:41.093782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.575 [2024-11-18 12:01:41.093818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.575 [2024-11-18 12:01:41.093831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:18:43.575 [2024-11-18 12:01:41.093839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.575 [2024-11-18 12:01:41.093997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.094009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.576 [2024-11-18 12:01:41.094018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:18:43.576 [2024-11-18 12:01:41.094032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.109789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.109834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.576 [2024-11-18 12:01:41.109848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.736 ms 00:18:43.576 [2024-11-18 12:01:41.109856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.123826] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:43.576 [2024-11-18 12:01:41.123878] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.576 [2024-11-18 12:01:41.123892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.123901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.576 [2024-11-18 12:01:41.123910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.932 ms 00:18:43.576 [2024-11-18 12:01:41.123918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.149913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.149963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.576 [2024-11-18 12:01:41.149984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.942 ms 00:18:43.576 [2024-11-18 12:01:41.149992] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.163009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.163067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.576 [2024-11-18 12:01:41.163079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.964 ms 00:18:43.576 [2024-11-18 12:01:41.163087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.175460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.175508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.576 [2024-11-18 12:01:41.175520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.329 ms 00:18:43.576 [2024-11-18 12:01:41.175528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.176194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.176230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.576 [2024-11-18 12:01:41.176241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:18:43.576 [2024-11-18 12:01:41.176249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.240686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.240752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:43.576 [2024-11-18 12:01:41.240769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.413 ms 00:18:43.576 [2024-11-18 12:01:41.240785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.251840] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:43.576 [2024-11-18 12:01:41.254642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.254693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.576 [2024-11-18 12:01:41.254705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.802 ms 00:18:43.576 [2024-11-18 12:01:41.254716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.254798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.254810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:43.576 [2024-11-18 12:01:41.254820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:43.576 [2024-11-18 12:01:41.254831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.254907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.254919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.576 [2024-11-18 12:01:41.254928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:43.576 [2024-11-18 12:01:41.254937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.254957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.254968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:43.576 [2024-11-18 12:01:41.254976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.576 [2024-11-18 12:01:41.254984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.576 [2024-11-18 12:01:41.255018] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:43.576 [2024-11-18 12:01:41.255030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.576 [2024-11-18 12:01:41.255041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:43.576 [2024-11-18 12:01:41.255050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:43.576 [2024-11-18 12:01:41.255059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.838 [2024-11-18 12:01:41.280411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.838 [2024-11-18 12:01:41.280463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.838 [2024-11-18 12:01:41.280477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.331 ms 00:18:43.838 [2024-11-18 12:01:41.280486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.838 [2024-11-18 12:01:41.280579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.838 [2024-11-18 12:01:41.280604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.838 [2024-11-18 12:01:41.280614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:43.838 [2024-11-18 12:01:41.280623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.838 [2024-11-18 12:01:41.281876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.547 ms, result 0 00:18:44.778  [2024-11-18T12:01:43.411Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-18T12:01:44.350Z] Copying: 39/1024 [MB] (27 MBps) [2024-11-18T12:01:45.737Z] Copying: 53/1024 [MB] (13 MBps) [2024-11-18T12:01:46.310Z] Copying: 65/1024 [MB] (12 MBps) [2024-11-18T12:01:47.696Z] Copying: 80/1024 [MB] (15 MBps) [2024-11-18T12:01:48.641Z] Copying: 90/1024 [MB] (10 MBps) [2024-11-18T12:01:49.585Z] Copying: 103/1024 [MB] (13 MBps) [2024-11-18T12:01:50.531Z] Copying: 114/1024 [MB] (10 MBps) [2024-11-18T12:01:51.475Z] Copying: 128/1024 [MB] (13 MBps) [2024-11-18T12:01:52.422Z] Copying: 143/1024 [MB] (15 MBps) [2024-11-18T12:01:53.364Z] Copying: 154/1024 [MB] (11 MBps) [2024-11-18T12:01:54.309Z] Copying: 171/1024 [MB] (16 MBps) [2024-11-18T12:01:55.694Z] Copying: 185/1024 [MB] (14 MBps) [2024-11-18T12:01:56.635Z] Copying: 201/1024 [MB] (16 MBps) [2024-11-18T12:01:57.580Z] Copying: 222/1024 [MB] (20 MBps) [2024-11-18T12:01:58.578Z] Copying: 232/1024 [MB] (10 MBps) [2024-11-18T12:01:59.561Z] Copying: 242/1024 [MB] (10 MBps) [2024-11-18T12:02:00.506Z] Copying: 252/1024 [MB] (10 MBps) [2024-11-18T12:02:01.451Z] Copying: 268/1024 [MB] (16 MBps) [2024-11-18T12:02:02.396Z] Copying: 290/1024 [MB] (22 MBps) [2024-11-18T12:02:03.340Z] Copying: 308040/1048576 [kB] (10112 kBps) [2024-11-18T12:02:04.729Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-18T12:02:05.301Z] Copying: 323/1024 [MB] (11 MBps) [2024-11-18T12:02:06.690Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-18T12:02:07.634Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-18T12:02:08.577Z] Copying: 356/1024 [MB] (10 MBps) [2024-11-18T12:02:09.521Z] Copying: 371/1024 [MB] 
(15 MBps) [2024-11-18T12:02:10.467Z] Copying: 386/1024 [MB] (14 MBps) [2024-11-18T12:02:11.409Z] Copying: 401/1024 [MB] (15 MBps) [2024-11-18T12:02:12.355Z] Copying: 416/1024 [MB] (14 MBps) [2024-11-18T12:02:13.298Z] Copying: 426/1024 [MB] (10 MBps) [2024-11-18T12:02:14.683Z] Copying: 443/1024 [MB] (16 MBps) [2024-11-18T12:02:15.627Z] Copying: 461/1024 [MB] (18 MBps) [2024-11-18T12:02:16.571Z] Copying: 473/1024 [MB] (12 MBps) [2024-11-18T12:02:17.506Z] Copying: 490/1024 [MB] (16 MBps) [2024-11-18T12:02:18.446Z] Copying: 514/1024 [MB] (24 MBps) [2024-11-18T12:02:19.387Z] Copying: 549/1024 [MB] (34 MBps) [2024-11-18T12:02:20.329Z] Copying: 568/1024 [MB] (19 MBps) [2024-11-18T12:02:21.711Z] Copying: 585/1024 [MB] (17 MBps) [2024-11-18T12:02:22.645Z] Copying: 601/1024 [MB] (15 MBps) [2024-11-18T12:02:23.581Z] Copying: 638/1024 [MB] (37 MBps) [2024-11-18T12:02:24.527Z] Copying: 672/1024 [MB] (33 MBps) [2024-11-18T12:02:25.468Z] Copying: 698/1024 [MB] (26 MBps) [2024-11-18T12:02:26.437Z] Copying: 711/1024 [MB] (13 MBps) [2024-11-18T12:02:27.372Z] Copying: 741/1024 [MB] (30 MBps) [2024-11-18T12:02:28.317Z] Copying: 773/1024 [MB] (31 MBps) [2024-11-18T12:02:29.700Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-18T12:02:30.642Z] Copying: 799/1024 [MB] (15 MBps) [2024-11-18T12:02:31.580Z] Copying: 825/1024 [MB] (25 MBps) [2024-11-18T12:02:32.524Z] Copying: 859/1024 [MB] (33 MBps) [2024-11-18T12:02:33.467Z] Copying: 876/1024 [MB] (17 MBps) [2024-11-18T12:02:34.408Z] Copying: 894/1024 [MB] (17 MBps) [2024-11-18T12:02:35.349Z] Copying: 913/1024 [MB] (18 MBps) [2024-11-18T12:02:36.739Z] Copying: 929/1024 [MB] (16 MBps) [2024-11-18T12:02:37.312Z] Copying: 946/1024 [MB] (16 MBps) [2024-11-18T12:02:38.701Z] Copying: 961/1024 [MB] (15 MBps) [2024-11-18T12:02:39.639Z] Copying: 975/1024 [MB] (13 MBps) [2024-11-18T12:02:40.574Z] Copying: 988/1024 [MB] (13 MBps) [2024-11-18T12:02:40.574Z] Copying: 1016/1024 [MB] (28 MBps) [2024-11-18T12:02:40.574Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-18 12:02:40.449021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.449057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:42.873 [2024-11-18 12:02:40.449068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:42.873 [2024-11-18 12:02:40.449075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.449091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.873 [2024-11-18 12:02:40.451235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.451260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:42.873 [2024-11-18 12:02:40.451269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:19:42.873 [2024-11-18 12:02:40.451276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.452540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.452568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:42.873 [2024-11-18 12:02:40.452575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:19:42.873 [2024-11-18 12:02:40.452591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.463776] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.463804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:42.873 [2024-11-18 12:02:40.463813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.173 ms 00:19:42.873 [2024-11-18 12:02:40.463819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.468596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.468627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:42.873 [2024-11-18 12:02:40.468635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:19:42.873 [2024-11-18 12:02:40.468641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.486472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.486500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:42.873 [2024-11-18 12:02:40.486509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.793 ms 00:19:42.873 [2024-11-18 12:02:40.486514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.497455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.497482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:42.873 [2024-11-18 12:02:40.497491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.914 ms 00:19:42.873 [2024-11-18 12:02:40.497498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.497591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.497598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:42.873 [2024-11-18 12:02:40.497608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:42.873 [2024-11-18 12:02:40.497614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.515310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.515336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:42.873 [2024-11-18 12:02:40.515343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.685 ms 00:19:42.873 [2024-11-18 12:02:40.515349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.532505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.532531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:42.873 [2024-11-18 12:02:40.532544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.132 ms 00:19:42.873 [2024-11-18 12:02:40.532549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.549216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.549242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:42.873 [2024-11-18 12:02:40.549249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.643 ms 00:19:42.873 [2024-11-18 12:02:40.549254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:42.873 [2024-11-18 12:02:40.565854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.873 [2024-11-18 12:02:40.565879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:42.873 [2024-11-18 12:02:40.565886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.560 ms 00:19:42.873 [2024-11-18 12:02:40.565891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.873 [2024-11-18 12:02:40.565914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:42.873 [2024-11-18 12:02:40.565924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.565998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:42.873 [2024-11-18 12:02:40.566037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 
wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566414] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566552] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:42.874 [2024-11-18 12:02:40.566569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:42.875 [2024-11-18 12:02:40.566575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:42.875 [2024-11-18 12:02:40.566596] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:42.875 [2024-11-18 12:02:40.566606] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 460b762a-5884-46ae-bbdc-38ff9e82ccce 00:19:42.875 [2024-11-18 12:02:40.566612] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:42.875 [2024-11-18 12:02:40.566619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:42.875 [2024-11-18 12:02:40.566624] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:42.875 [2024-11-18 12:02:40.566630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:42.875 [2024-11-18 12:02:40.566635] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:42.875 [2024-11-18 12:02:40.566641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:42.875 [2024-11-18 12:02:40.566646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:42.875 [2024-11-18 12:02:40.566656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:42.875 [2024-11-18 12:02:40.566660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:42.875 [2024-11-18 12:02:40.566665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.875 [2024-11-18 12:02:40.566671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:42.875 [2024-11-18 12:02:40.566677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:19:42.875 [2024-11-18 12:02:40.566682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.575852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.136 [2024-11-18 12:02:40.575876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:43.136 [2024-11-18 12:02:40.575883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.158 ms 00:19:43.136 [2024-11-18 12:02:40.575889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.576157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.136 [2024-11-18 12:02:40.576171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:43.136 [2024-11-18 12:02:40.576177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:43.136 [2024-11-18 12:02:40.576183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.601688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.136 [2024-11-18 12:02:40.601714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.136 [2024-11-18 12:02:40.601722] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.136 [2024-11-18 12:02:40.601727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.601769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.136 [2024-11-18 12:02:40.601774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.136 [2024-11-18 12:02:40.601780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.136 [2024-11-18 12:02:40.601786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.601830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.136 [2024-11-18 12:02:40.601838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.136 [2024-11-18 12:02:40.601844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.136 [2024-11-18 12:02:40.601849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.601860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.136 [2024-11-18 12:02:40.601866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.136 [2024-11-18 12:02:40.601871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.136 [2024-11-18 12:02:40.601876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.659946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.136 [2024-11-18 12:02:40.659981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.136 [2024-11-18 12:02:40.659989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.136 [2024-11-18 12:02:40.659995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.136 [2024-11-18 12:02:40.707704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.136 [2024-11-18 12:02:40.707739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.136 [2024-11-18 12:02:40.707748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.707754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.707803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.137 [2024-11-18 12:02:40.707814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.137 [2024-11-18 12:02:40.707820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.707826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.707850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.137 [2024-11-18 12:02:40.707856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.137 [2024-11-18 12:02:40.707862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.707868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.707932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.137 [2024-11-18 12:02:40.707941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:19:43.137 [2024-11-18 12:02:40.707948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.707953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.707975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.137 [2024-11-18 12:02:40.707981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:43.137 [2024-11-18 12:02:40.707987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.707993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.708021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.137 [2024-11-18 12:02:40.708027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.137 [2024-11-18 12:02:40.708035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.708041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.708070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.137 [2024-11-18 12:02:40.708077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.137 [2024-11-18 12:02:40.708083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.137 [2024-11-18 12:02:40.708089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.137 [2024-11-18 12:02:40.708177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.132 ms, result 0 00:19:43.706 00:19:43.706 00:19:43.706 12:02:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:43.966 [2024-11-18 12:02:41.428763] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
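
The spdk_dd pass above reports its progress in "Copying: N/1024 [MB] (R MBps)" entries. If a quick throughput summary is wanted, a minimal sketch along these lines works, assuming the console output has been saved to a local file (build.log is a hypothetical name, not something the test produces):

#!/usr/bin/env python3
"""Hypothetical helper: summarize the spdk_dd 'Copying:' progress entries
above. Not part of SPDK; a sketch that assumes the console output was
saved locally as build.log."""
import re

# Matches e.g. "Copying: 39/1024 [MB] (27 MBps)"; the one kB-based entry
# ("Copying: 308040/1048576 [kB] (10112 kBps)") is normalized to MBps.
# The final "(average 17 MBps)" entry intentionally does not match.
PAT = re.compile(r"Copying: (\d+)/(\d+) \[([kM])B\] \((\d+) ([kM])Bps\)")

def summarize(text: str) -> None:
    rates = []
    for done, total, unit, rate, runit in PAT.findall(text):
        rates.append(int(rate) / (1024 if runit == "k" else 1))
    if rates:
        print(f"samples={len(rates)} "
              f"min={min(rates):.1f} max={max(rates):.1f} "
              f"mean={sum(rates)/len(rates):.1f} MBps")

if __name__ == "__main__":
    with open("build.log") as f:  # hypothetical path
        summarize(f.read())
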
00:19:43.966 [2024-11-18 12:02:41.428880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75304 ] 00:19:43.966 [2024-11-18 12:02:41.585242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.966 [2024-11-18 12:02:41.658935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.224 [2024-11-18 12:02:41.862399] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.225 [2024-11-18 12:02:41.862443] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.484 [2024-11-18 12:02:42.009225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.484 [2024-11-18 12:02:42.009255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.484 [2024-11-18 12:02:42.009268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:44.484 [2024-11-18 12:02:42.009273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.484 [2024-11-18 12:02:42.009305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.484 [2024-11-18 12:02:42.009313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.484 [2024-11-18 12:02:42.009321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:44.484 [2024-11-18 12:02:42.009326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.484 [2024-11-18 12:02:42.009339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.485 [2024-11-18 12:02:42.009864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.485 [2024-11-18 12:02:42.009876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.009882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.485 [2024-11-18 12:02:42.009889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:19:44.485 [2024-11-18 12:02:42.009895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.010893] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:44.485 [2024-11-18 12:02:42.020450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.020473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:44.485 [2024-11-18 12:02:42.020481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.558 ms 00:19:44.485 [2024-11-18 12:02:42.020487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.020529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.020536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:44.485 [2024-11-18 12:02:42.020543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:44.485 [2024-11-18 12:02:42.020548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.024832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
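
Each FTL management sequence in this log ('FTL startup', 'FTL shutdown', and the restore pass beginning here) is traced as discrete steps, where a "name:" entry is always followed by a "duration: ... ms" entry from trace_step. A minimal sketch for pairing the two to see where the time goes, again assuming the log text is saved as build.log (hypothetical):

#!/usr/bin/env python3
"""Hypothetical helper: pair the trace_step 'name:' / 'duration:' entries
above and rank the steps by total time. A sketch only, not an SPDK tool."""
import re
from collections import defaultdict

# A step name runs up to the next console timestamp (hh:mm:ss...); the
# matching duration entry follows before any other "duration:" appears,
# so a lazy match pairs them reliably.
PAIR = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}.*?duration: ([0-9.]+) ms",
                  re.DOTALL)

def profile(text: str, top: int = 10) -> None:
    totals = defaultdict(float)
    for name, ms in PAIR.findall(text):
        totals[name] += float(ms)
    for name, ms in sorted(totals.items(), key=lambda kv: -kv[1])[:top]:
        print(f"{ms:10.3f} ms  {name}")

if __name__ == "__main__":
    with open("build.log") as f:  # hypothetical path
        profile(f.read())
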
00:19:44.485 [2024-11-18 12:02:42.024851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.485 [2024-11-18 12:02:42.024858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:19:44.485 [2024-11-18 12:02:42.024864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.024918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.024925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.485 [2024-11-18 12:02:42.024931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:44.485 [2024-11-18 12:02:42.024936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.024973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.024980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.485 [2024-11-18 12:02:42.024986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.485 [2024-11-18 12:02:42.024991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.025004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.485 [2024-11-18 12:02:42.027652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.027671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.485 [2024-11-18 12:02:42.027678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:19:44.485 [2024-11-18 12:02:42.027686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.027710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.027716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.485 [2024-11-18 12:02:42.027722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:44.485 [2024-11-18 12:02:42.027728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.027740] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:44.485 [2024-11-18 12:02:42.027753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:44.485 [2024-11-18 12:02:42.027779] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:44.485 [2024-11-18 12:02:42.027792] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:44.485 [2024-11-18 12:02:42.027868] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.485 [2024-11-18 12:02:42.027876] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.485 [2024-11-18 12:02:42.027883] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:44.485 [2024-11-18 12:02:42.027891] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.485 [2024-11-18 12:02:42.027897] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.485 [2024-11-18 12:02:42.027903] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:44.485 [2024-11-18 12:02:42.027909] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.485 [2024-11-18 12:02:42.027914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.485 [2024-11-18 12:02:42.027920] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.485 [2024-11-18 12:02:42.027927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.027933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.485 [2024-11-18 12:02:42.027938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:19:44.485 [2024-11-18 12:02:42.027943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.028006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.485 [2024-11-18 12:02:42.028012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.485 [2024-11-18 12:02:42.028018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:44.485 [2024-11-18 12:02:42.028023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.485 [2024-11-18 12:02:42.028097] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.485 [2024-11-18 12:02:42.028106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.485 [2024-11-18 12:02:42.028112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.485 [2024-11-18 12:02:42.028130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.485 [2024-11-18 12:02:42.028146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.485 [2024-11-18 12:02:42.028156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.485 [2024-11-18 12:02:42.028161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:44.485 [2024-11-18 12:02:42.028166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.485 [2024-11-18 12:02:42.028172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.485 [2024-11-18 12:02:42.028178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:44.485 [2024-11-18 12:02:42.028187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.485 [2024-11-18 12:02:42.028197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028202] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.485 [2024-11-18 12:02:42.028212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.485 [2024-11-18 12:02:42.028227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.485 [2024-11-18 12:02:42.028241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:44.485 [2024-11-18 12:02:42.028256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.485 [2024-11-18 12:02:42.028265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.485 [2024-11-18 12:02:42.028270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:44.485 [2024-11-18 12:02:42.028275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.485 [2024-11-18 12:02:42.028280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.485 [2024-11-18 12:02:42.028284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:44.485 [2024-11-18 12:02:42.028289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.486 [2024-11-18 12:02:42.028294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.486 [2024-11-18 12:02:42.028299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:44.486 [2024-11-18 12:02:42.028304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.486 [2024-11-18 12:02:42.028309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.486 [2024-11-18 12:02:42.028313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:44.486 [2024-11-18 12:02:42.028318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.486 [2024-11-18 12:02:42.028323] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.486 [2024-11-18 12:02:42.028329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.486 [2024-11-18 12:02:42.028335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.486 [2024-11-18 12:02:42.028341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.486 [2024-11-18 12:02:42.028347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.486 [2024-11-18 12:02:42.028352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.486 [2024-11-18 12:02:42.028357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.486 
[2024-11-18 12:02:42.028362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.486 [2024-11-18 12:02:42.028367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.486 [2024-11-18 12:02:42.028371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.486 [2024-11-18 12:02:42.028378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.486 [2024-11-18 12:02:42.028384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:44.486 [2024-11-18 12:02:42.028397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:44.486 [2024-11-18 12:02:42.028402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:44.486 [2024-11-18 12:02:42.028407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:44.486 [2024-11-18 12:02:42.028412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:44.486 [2024-11-18 12:02:42.028417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:44.486 [2024-11-18 12:02:42.028423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:44.486 [2024-11-18 12:02:42.028428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:44.486 [2024-11-18 12:02:42.028433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:44.486 [2024-11-18 12:02:42.028438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:44.486 [2024-11-18 12:02:42.028465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.486 [2024-11-18 12:02:42.028473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.486 [2024-11-18 12:02:42.028485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.486 [2024-11-18 12:02:42.028491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.486 [2024-11-18 12:02:42.028497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.486 [2024-11-18 12:02:42.028502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.028508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.486 [2024-11-18 12:02:42.028514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:19:44.486 [2024-11-18 12:02:42.028519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.049036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.049059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.486 [2024-11-18 12:02:42.049066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.486 ms 00:19:44.486 [2024-11-18 12:02:42.049072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.049137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.049143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:44.486 [2024-11-18 12:02:42.049149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:44.486 [2024-11-18 12:02:42.049154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.084800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.084827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.486 [2024-11-18 12:02:42.084836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.611 ms 00:19:44.486 [2024-11-18 12:02:42.084842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.084866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.084872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.486 [2024-11-18 12:02:42.084879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:44.486 [2024-11-18 12:02:42.084887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.085202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.085221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.486 [2024-11-18 12:02:42.085228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:44.486 [2024-11-18 12:02:42.085234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.085332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.085339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.486 [2024-11-18 12:02:42.085345] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:44.486 [2024-11-18 12:02:42.085351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.095645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.095666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.486 [2024-11-18 12:02:42.095674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.275 ms 00:19:44.486 [2024-11-18 12:02:42.095682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.105231] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:44.486 [2024-11-18 12:02:42.105254] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:44.486 [2024-11-18 12:02:42.105263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.105269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:44.486 [2024-11-18 12:02:42.105276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.514 ms 00:19:44.486 [2024-11-18 12:02:42.105281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.123646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.123672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:44.486 [2024-11-18 12:02:42.123680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.336 ms 00:19:44.486 [2024-11-18 12:02:42.123687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.132442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.132463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:44.486 [2024-11-18 12:02:42.132469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.727 ms 00:19:44.486 [2024-11-18 12:02:42.132475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.486 [2024-11-18 12:02:42.140766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.486 [2024-11-18 12:02:42.140787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:44.486 [2024-11-18 12:02:42.140795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.266 ms 00:19:44.486 [2024-11-18 12:02:42.140800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.487 [2024-11-18 12:02:42.141242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.487 [2024-11-18 12:02:42.141259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:44.487 [2024-11-18 12:02:42.141266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:19:44.487 [2024-11-18 12:02:42.141273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.184095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.184127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:44.748 [2024-11-18 12:02:42.184140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.808 ms 00:19:44.748 [2024-11-18 12:02:42.184147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.191887] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:44.748 [2024-11-18 12:02:42.193505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.193526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:44.748 [2024-11-18 12:02:42.193533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.326 ms 00:19:44.748 [2024-11-18 12:02:42.193539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.193600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.193609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:44.748 [2024-11-18 12:02:42.193616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.748 [2024-11-18 12:02:42.193623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.193666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.193674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:44.748 [2024-11-18 12:02:42.193680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:44.748 [2024-11-18 12:02:42.193686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.193699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.193705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:44.748 [2024-11-18 12:02:42.193711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.748 [2024-11-18 12:02:42.193716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.193740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:44.748 [2024-11-18 12:02:42.193748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.193754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:44.748 [2024-11-18 12:02:42.193760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.748 [2024-11-18 12:02:42.193765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.212124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.212148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:44.748 [2024-11-18 12:02:42.212156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.346 ms 00:19:44.748 [2024-11-18 12:02:42.212165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 [2024-11-18 12:02:42.212219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.748 [2024-11-18 12:02:42.212226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:44.748 [2024-11-18 12:02:42.212232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:44.748 [2024-11-18 12:02:42.212238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.748 
[2024-11-18 12:02:42.212931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 203.375 ms, result 0 00:19:45.694  [2024-11-18T12:02:44.783Z] Copying: 23/1024 [MB] (23 MBps) [... 61 intermediate Copying progress updates (35/1024 MB through 1004/1024 MB, 10-28 MBps) ...] 
[2024-11-18T12:03:45.381Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-18 12:03:45.230293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.230365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:47.680 [2024-11-18 12:03:45.230382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:47.680 [2024-11-18 12:03:45.230391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.230413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:47.680 [2024-11-18 12:03:45.233490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.233552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:47.680 [2024-11-18 12:03:45.233573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:20:47.680 [2024-11-18 12:03:45.233592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.233811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.233827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:47.680 [2024-11-18 12:03:45.233837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:47.680 [2024-11-18 12:03:45.233845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.237292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.237315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:47.680 [2024-11-18 12:03:45.237325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:20:47.680 [2024-11-18 12:03:45.237334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.244697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.244744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:47.680 [2024-11-18 12:03:45.244756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.342 ms 00:20:47.680 [2024-11-18 12:03:45.244764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.272075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.272125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:47.680 [2024-11-18 12:03:45.272138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.244 ms 00:20:47.680 [2024-11-18 12:03:45.272147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.287868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.287916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.680 [2024-11-18 12:03:45.287929] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.673 ms 00:20:47.680 [2024-11-18 12:03:45.287937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.288081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.288100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.680 [2024-11-18 12:03:45.288109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:47.680 [2024-11-18 12:03:45.288118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.313643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.313688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:47.680 [2024-11-18 12:03:45.313699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.509 ms 00:20:47.680 [2024-11-18 12:03:45.313706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.338896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.338964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:47.680 [2024-11-18 12:03:45.338976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.144 ms 00:20:47.680 [2024-11-18 12:03:45.338983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.680 [2024-11-18 12:03:45.363552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.680 [2024-11-18 12:03:45.363612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.680 [2024-11-18 12:03:45.363624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.523 ms 00:20:47.680 [2024-11-18 12:03:45.363631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.943 [2024-11-18 12:03:45.388504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.943 [2024-11-18 12:03:45.388555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.943 [2024-11-18 12:03:45.388567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.780 ms 00:20:47.943 [2024-11-18 12:03:45.388574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.943 [2024-11-18 12:03:45.388627] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:47.943 [2024-11-18 12:03:45.388644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:20:47.943 [2024-11-18 12:03:45.388712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.388997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:47.943 [2024-11-18 12:03:45.389128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389285] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:47.944 [2024-11-18 12:03:45.389444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.944 [2024-11-18 12:03:45.389456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 460b762a-5884-46ae-bbdc-38ff9e82ccce 00:20:47.944 [2024-11-18 12:03:45.389464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:47.944 [2024-11-18 12:03:45.389471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:47.944 [2024-11-18 12:03:45.389479] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:47.944 [2024-11-18 12:03:45.389487] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:47.944 [2024-11-18 12:03:45.389494] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.944 [2024-11-18 12:03:45.389503] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.944 [2024-11-18 12:03:45.389517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.944 [2024-11-18 12:03:45.389524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.944 [2024-11-18 12:03:45.389530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:47.944 [2024-11-18 12:03:45.389538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.944 [2024-11-18 12:03:45.389545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.944 [2024-11-18 12:03:45.389554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:20:47.944 [2024-11-18 12:03:45.389562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.403184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.944 [2024-11-18 12:03:45.403225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.944 [2024-11-18 12:03:45.403236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.589 ms 00:20:47.944 [2024-11-18 12:03:45.403245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.403695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.944 [2024-11-18 12:03:45.403718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.944 [2024-11-18 12:03:45.403728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:47.944 [2024-11-18 12:03:45.403743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.440570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.440624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.944 [2024-11-18 12:03:45.440636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.440646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.440711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.440722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.944 [2024-11-18 12:03:45.440732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.440745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.440817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.440828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.944 [2024-11-18 12:03:45.440838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.440846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.440862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.440871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.944 [2024-11-18 12:03:45.440880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.440889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:47.944 [2024-11-18 12:03:45.526032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.526088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.944 [2024-11-18 12:03:45.526101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.526110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.595656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.595709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.944 [2024-11-18 12:03:45.595722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.595731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.944 [2024-11-18 12:03:45.595815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.944 [2024-11-18 12:03:45.595825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.944 [2024-11-18 12:03:45.595835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.944 [2024-11-18 12:03:45.595843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.945 [2024-11-18 12:03:45.595884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.945 [2024-11-18 12:03:45.595894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.945 [2024-11-18 12:03:45.595903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.945 [2024-11-18 12:03:45.595911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.945 [2024-11-18 12:03:45.596011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.945 [2024-11-18 12:03:45.596021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.945 [2024-11-18 12:03:45.596030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.945 [2024-11-18 12:03:45.596038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.945 [2024-11-18 12:03:45.596068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.945 [2024-11-18 12:03:45.596079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:47.945 [2024-11-18 12:03:45.596089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.945 [2024-11-18 12:03:45.596097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.945 [2024-11-18 12:03:45.596138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.945 [2024-11-18 12:03:45.596151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.945 [2024-11-18 12:03:45.596159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.945 [2024-11-18 12:03:45.596167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.945 [2024-11-18 12:03:45.596213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.945 [2024-11-18 12:03:45.596223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.945 [2024-11-18 12:03:45.596232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.945 [2024-11-18 12:03:45.596241] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.945 [2024-11-18 12:03:45.596371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.042 ms, result 0 00:20:48.888 00:20:48.888 00:20:48.888 12:03:46 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.803 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:50.803 12:03:48 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:51.065 [2024-11-18 12:03:48.527766] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:20:51.065 [2024-11-18 12:03:48.527881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75988 ] 00:20:51.065 [2024-11-18 12:03:48.685907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.325 [2024-11-18 12:03:48.806480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.587 [2024-11-18 12:03:49.093804] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.587 [2024-11-18 12:03:49.093889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.587 [2024-11-18 12:03:49.252783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.252830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:51.587 [2024-11-18 12:03:49.252847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.587 [2024-11-18 12:03:49.252855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.252901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.252911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.587 [2024-11-18 12:03:49.252925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:51.587 [2024-11-18 12:03:49.252932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.252952] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:51.587 [2024-11-18 12:03:49.253691] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:51.587 [2024-11-18 12:03:49.253715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.253722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.587 [2024-11-18 12:03:49.253731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:20:51.587 [2024-11-18 12:03:49.253738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.254928] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:51.587 [2024-11-18 12:03:49.267640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.267676] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Load super block 00:20:51.587 [2024-11-18 12:03:49.267694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.714 ms 00:20:51.587 [2024-11-18 12:03:49.267705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.267771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.267780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:51.587 [2024-11-18 12:03:49.267788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:51.587 [2024-11-18 12:03:49.267795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.272921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.272951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.587 [2024-11-18 12:03:49.272960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:20:51.587 [2024-11-18 12:03:49.272967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.273035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.273044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.587 [2024-11-18 12:03:49.273052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:51.587 [2024-11-18 12:03:49.273059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.273106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.273116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:51.587 [2024-11-18 12:03:49.273124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:51.587 [2024-11-18 12:03:49.273131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.273151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.587 [2024-11-18 12:03:49.276404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.276431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.587 [2024-11-18 12:03:49.276440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:20:51.587 [2024-11-18 12:03:49.276449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.276476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.276484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:51.587 [2024-11-18 12:03:49.276492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:51.587 [2024-11-18 12:03:49.276499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.276517] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:51.587 [2024-11-18 12:03:49.276535] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:51.587 [2024-11-18 12:03:49.276569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 
bytes 00:20:51.587 [2024-11-18 12:03:49.276596] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:51.587 [2024-11-18 12:03:49.276699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:51.587 [2024-11-18 12:03:49.276709] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:51.587 [2024-11-18 12:03:49.276719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:51.587 [2024-11-18 12:03:49.276730] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:51.587 [2024-11-18 12:03:49.276738] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:51.587 [2024-11-18 12:03:49.276746] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:51.587 [2024-11-18 12:03:49.276753] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:51.587 [2024-11-18 12:03:49.276760] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:51.587 [2024-11-18 12:03:49.276767] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:51.587 [2024-11-18 12:03:49.276776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.587 [2024-11-18 12:03:49.276784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:51.587 [2024-11-18 12:03:49.276791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:51.587 [2024-11-18 12:03:49.276798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.587 [2024-11-18 12:03:49.276880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.588 [2024-11-18 12:03:49.276888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:51.588 [2024-11-18 12:03:49.276895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:51.588 [2024-11-18 12:03:49.276902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.588 [2024-11-18 12:03:49.277001] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:51.588 [2024-11-18 12:03:49.277013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:51.588 [2024-11-18 12:03:49.277021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:51.588 [2024-11-18 12:03:49.277042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:51.588 [2024-11-18 12:03:49.277062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.588 [2024-11-18 12:03:49.277077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:51.588 [2024-11-18 12:03:49.277083] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:51.588 [2024-11-18 12:03:49.277090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.588 [2024-11-18 12:03:49.277096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:51.588 [2024-11-18 12:03:49.277103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:51.588 [2024-11-18 12:03:49.277115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:51.588 [2024-11-18 12:03:49.277128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:51.588 [2024-11-18 12:03:49.277147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:51.588 [2024-11-18 12:03:49.277167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:51.588 [2024-11-18 12:03:49.277186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:51.588 [2024-11-18 12:03:49.277205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:51.588 [2024-11-18 12:03:49.277224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.588 [2024-11-18 12:03:49.277237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:51.588 [2024-11-18 12:03:49.277244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:51.588 [2024-11-18 12:03:49.277250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.588 [2024-11-18 12:03:49.277257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:51.588 [2024-11-18 12:03:49.277264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:51.588 [2024-11-18 12:03:49.277270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:51.588 [2024-11-18 12:03:49.277283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:51.588 [2024-11-18 12:03:49.277289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:20:51.588 [2024-11-18 12:03:49.277295] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:51.588 [2024-11-18 12:03:49.277303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:51.588 [2024-11-18 12:03:49.277310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.588 [2024-11-18 12:03:49.277325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:51.588 [2024-11-18 12:03:49.277332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:51.588 [2024-11-18 12:03:49.277338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:51.588 [2024-11-18 12:03:49.277345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:51.588 [2024-11-18 12:03:49.277352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:51.588 [2024-11-18 12:03:49.277358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:51.588 [2024-11-18 12:03:49.277366] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:51.588 [2024-11-18 12:03:49.277374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.588 [2024-11-18 12:03:49.277383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:51.588 [2024-11-18 12:03:49.277390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:51.588 [2024-11-18 12:03:49.277397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:51.588 [2024-11-18 12:03:49.277404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:51.588 [2024-11-18 12:03:49.277410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:51.588 [2024-11-18 12:03:49.277417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:51.588 [2024-11-18 12:03:49.277424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:51.588 [2024-11-18 12:03:49.277431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:51.588 [2024-11-18 12:03:49.277438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:51.588 [2024-11-18 12:03:49.277445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:51.588 [2024-11-18 12:03:49.277451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:51.588 [2024-11-18 12:03:49.277458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:51.588 
[2024-11-18 12:03:49.277465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:51.588 [2024-11-18 12:03:49.277472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:51.588 [2024-11-18 12:03:49.277479] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:51.588 [2024-11-18 12:03:49.277489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.588 [2024-11-18 12:03:49.277497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:51.588 [2024-11-18 12:03:49.277504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:51.588 [2024-11-18 12:03:49.277511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:51.588 [2024-11-18 12:03:49.277518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:51.588 [2024-11-18 12:03:49.277526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.588 [2024-11-18 12:03:49.277534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:51.588 [2024-11-18 12:03:49.277540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:20:51.588 [2024-11-18 12:03:49.277547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.303952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.303985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.850 [2024-11-18 12:03:49.303996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.345 ms 00:20:51.850 [2024-11-18 12:03:49.304003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.304086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.304094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:51.850 [2024-11-18 12:03:49.304102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:51.850 [2024-11-18 12:03:49.304110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.349131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.349172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.850 [2024-11-18 12:03:49.349184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.972 ms 00:20:51.850 [2024-11-18 12:03:49.349193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.349232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.349241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.850 [2024-11-18 12:03:49.349250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:51.850 
[2024-11-18 12:03:49.349260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.349696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.349722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.850 [2024-11-18 12:03:49.349731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:20:51.850 [2024-11-18 12:03:49.349738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.349864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.349873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.850 [2024-11-18 12:03:49.349882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:51.850 [2024-11-18 12:03:49.349893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.363612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.363652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.850 [2024-11-18 12:03:49.363665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.698 ms 00:20:51.850 [2024-11-18 12:03:49.363673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.376933] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:51.850 [2024-11-18 12:03:49.376970] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:51.850 [2024-11-18 12:03:49.376982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.376989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:51.850 [2024-11-18 12:03:49.376998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.220 ms 00:20:51.850 [2024-11-18 12:03:49.377005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.402107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.402170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:51.850 [2024-11-18 12:03:49.402181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.059 ms 00:20:51.850 [2024-11-18 12:03:49.402189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.414383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.414424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:51.850 [2024-11-18 12:03:49.414436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.148 ms 00:20:51.850 [2024-11-18 12:03:49.414443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.426879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.426919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:51.850 [2024-11-18 12:03:49.426931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.395 ms 00:20:51.850 [2024-11-18 12:03:49.426939] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.427613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.427649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:51.850 [2024-11-18 12:03:49.427659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:20:51.850 [2024-11-18 12:03:49.427670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.488951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.489000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:51.850 [2024-11-18 12:03:49.489018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.261 ms 00:20:51.850 [2024-11-18 12:03:49.489026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.499350] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:51.850 [2024-11-18 12:03:49.501522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.501552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:51.850 [2024-11-18 12:03:49.501564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.456 ms 00:20:51.850 [2024-11-18 12:03:49.501573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.850 [2024-11-18 12:03:49.501649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.850 [2024-11-18 12:03:49.501661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:51.851 [2024-11-18 12:03:49.501671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:51.851 [2024-11-18 12:03:49.501682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.851 [2024-11-18 12:03:49.501745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.851 [2024-11-18 12:03:49.501756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:51.851 [2024-11-18 12:03:49.501766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:51.851 [2024-11-18 12:03:49.501773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.851 [2024-11-18 12:03:49.501790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.851 [2024-11-18 12:03:49.501798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:51.851 [2024-11-18 12:03:49.501806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:51.851 [2024-11-18 12:03:49.501813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.851 [2024-11-18 12:03:49.501841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:51.851 [2024-11-18 12:03:49.501852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.851 [2024-11-18 12:03:49.501859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:51.851 [2024-11-18 12:03:49.501867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:51.851 [2024-11-18 12:03:49.501874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.851 [2024-11-18 12:03:49.525491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:51.851 [2024-11-18 12:03:49.525525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:51.851 [2024-11-18 12:03:49.525537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.601 ms 00:20:51.851 [2024-11-18 12:03:49.525549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.851 [2024-11-18 12:03:49.525631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.851 [2024-11-18 12:03:49.525641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:51.851 [2024-11-18 12:03:49.525650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:51.851 [2024-11-18 12:03:49.525657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.851 [2024-11-18 12:03:49.527274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.068 ms, result 0 00:20:53.238  [2024-11-18T12:03:51.884Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-18T12:03:52.910Z] Copying: 31/1024 [MB] (11 MBps) [2024-11-18T12:03:53.856Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-18T12:03:54.801Z] Copying: 56/1024 [MB] (14 MBps) [2024-11-18T12:03:55.745Z] Copying: 70/1024 [MB] (14 MBps) [2024-11-18T12:03:56.689Z] Copying: 81/1024 [MB] (10 MBps) [2024-11-18T12:03:57.634Z] Copying: 93/1024 [MB] (12 MBps) [2024-11-18T12:03:58.578Z] Copying: 104/1024 [MB] (11 MBps) [2024-11-18T12:03:59.966Z] Copying: 122/1024 [MB] (17 MBps) [2024-11-18T12:04:00.911Z] Copying: 132/1024 [MB] (10 MBps) [2024-11-18T12:04:01.855Z] Copying: 143/1024 [MB] (10 MBps) [2024-11-18T12:04:02.792Z] Copying: 155/1024 [MB] (12 MBps) [2024-11-18T12:04:03.732Z] Copying: 181/1024 [MB] (25 MBps) [2024-11-18T12:04:04.677Z] Copying: 225/1024 [MB] (43 MBps) [2024-11-18T12:04:05.621Z] Copying: 241/1024 [MB] (16 MBps) [2024-11-18T12:04:06.566Z] Copying: 262/1024 [MB] (20 MBps) [2024-11-18T12:04:07.965Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-18T12:04:08.910Z] Copying: 293/1024 [MB] (15 MBps) [2024-11-18T12:04:09.853Z] Copying: 306/1024 [MB] (13 MBps) [2024-11-18T12:04:10.798Z] Copying: 317/1024 [MB] (11 MBps) [2024-11-18T12:04:11.744Z] Copying: 331/1024 [MB] (13 MBps) [2024-11-18T12:04:12.690Z] Copying: 343/1024 [MB] (12 MBps) [2024-11-18T12:04:13.635Z] Copying: 360/1024 [MB] (17 MBps) [2024-11-18T12:04:14.575Z] Copying: 372/1024 [MB] (11 MBps) [2024-11-18T12:04:15.962Z] Copying: 386/1024 [MB] (13 MBps) [2024-11-18T12:04:16.906Z] Copying: 403/1024 [MB] (17 MBps) [2024-11-18T12:04:17.852Z] Copying: 424/1024 [MB] (20 MBps) [2024-11-18T12:04:18.794Z] Copying: 443/1024 [MB] (18 MBps) [2024-11-18T12:04:19.736Z] Copying: 453/1024 [MB] (10 MBps) [2024-11-18T12:04:20.681Z] Copying: 464/1024 [MB] (10 MBps) [2024-11-18T12:04:21.693Z] Copying: 477/1024 [MB] (13 MBps) [2024-11-18T12:04:22.635Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-18T12:04:23.577Z] Copying: 498/1024 [MB] (10 MBps) [2024-11-18T12:04:24.960Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-18T12:04:25.562Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-18T12:04:26.951Z] Copying: 528/1024 [MB] (10 MBps) [2024-11-18T12:04:27.892Z] Copying: 551304/1048576 [kB] (10168 kBps) [2024-11-18T12:04:28.827Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-18T12:04:29.782Z] Copying: 594/1024 [MB] (46 MBps) [2024-11-18T12:04:30.724Z] Copying: 605/1024 [MB] (11 MBps) [2024-11-18T12:04:31.667Z] Copying: 625/1024 [MB] (19 MBps) [2024-11-18T12:04:32.612Z] Copying: 656/1024 [MB] (30 MBps) 
[2024-11-18T12:04:33.556Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-18T12:04:34.943Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-18T12:04:35.890Z] Copying: 697/1024 [MB] (19 MBps) [2024-11-18T12:04:36.833Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-18T12:04:37.774Z] Copying: 720/1024 [MB] (12 MBps) [2024-11-18T12:04:38.714Z] Copying: 730/1024 [MB] (10 MBps) [2024-11-18T12:04:39.659Z] Copying: 749/1024 [MB] (19 MBps) [2024-11-18T12:04:40.603Z] Copying: 769/1024 [MB] (20 MBps) [2024-11-18T12:04:41.548Z] Copying: 783/1024 [MB] (13 MBps) [2024-11-18T12:04:42.935Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-18T12:04:43.872Z] Copying: 805/1024 [MB] (10 MBps) [2024-11-18T12:04:44.810Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-18T12:04:45.751Z] Copying: 854/1024 [MB] (37 MBps) [2024-11-18T12:04:46.693Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-18T12:04:47.638Z] Copying: 884/1024 [MB] (19 MBps) [2024-11-18T12:04:48.581Z] Copying: 905/1024 [MB] (21 MBps) [2024-11-18T12:04:49.965Z] Copying: 920/1024 [MB] (15 MBps) [2024-11-18T12:04:50.564Z] Copying: 932/1024 [MB] (11 MBps) [2024-11-18T12:04:51.951Z] Copying: 949/1024 [MB] (16 MBps) [2024-11-18T12:04:52.894Z] Copying: 959/1024 [MB] (10 MBps) [2024-11-18T12:04:53.838Z] Copying: 969/1024 [MB] (10 MBps) [2024-11-18T12:04:54.783Z] Copying: 981/1024 [MB] (11 MBps) [2024-11-18T12:04:55.726Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-18T12:04:56.660Z] Copying: 1001/1024 [MB] (10 MBps) [2024-11-18T12:04:56.920Z] Copying: 1023/1024 [MB] (22 MBps) [2024-11-18T12:04:56.920Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 12:04:56.812483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.812541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:59.219 [2024-11-18 12:04:56.812555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:59.219 [2024-11-18 12:04:56.812572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.219 [2024-11-18 12:04:56.812614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.219 [2024-11-18 12:04:56.815292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.815320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:59.219 [2024-11-18 12:04:56.815331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:21:59.219 [2024-11-18 12:04:56.815339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.219 [2024-11-18 12:04:56.826858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.826895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:59.219 [2024-11-18 12:04:56.826906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.922 ms 00:21:59.219 [2024-11-18 12:04:56.826919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.219 [2024-11-18 12:04:56.847349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.847386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:59.219 [2024-11-18 12:04:56.847398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.412 ms 00:21:59.219 [2024-11-18 12:04:56.847405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.219 [2024-11-18 12:04:56.853529] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.853560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:59.219 [2024-11-18 12:04:56.853570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms 00:21:59.219 [2024-11-18 12:04:56.853578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.219 [2024-11-18 12:04:56.878067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.878105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:59.219 [2024-11-18 12:04:56.878116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.427 ms 00:21:59.219 [2024-11-18 12:04:56.878124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.219 [2024-11-18 12:04:56.892787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.219 [2024-11-18 12:04:56.892842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:59.219 [2024-11-18 12:04:56.892854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.628 ms 00:21:59.219 [2024-11-18 12:04:56.892862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.798 [2024-11-18 12:04:57.197299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.798 [2024-11-18 12:04:57.197352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:59.798 [2024-11-18 12:04:57.197366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 304.411 ms 00:21:59.798 [2024-11-18 12:04:57.197376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.798 [2024-11-18 12:04:57.223143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.798 [2024-11-18 12:04:57.223191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:59.798 [2024-11-18 12:04:57.223205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.749 ms 00:21:59.798 [2024-11-18 12:04:57.223212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.798 [2024-11-18 12:04:57.248275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.798 [2024-11-18 12:04:57.248335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:59.798 [2024-11-18 12:04:57.248347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.019 ms 00:21:59.798 [2024-11-18 12:04:57.248354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.798 [2024-11-18 12:04:57.272689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.798 [2024-11-18 12:04:57.272735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:59.798 [2024-11-18 12:04:57.272747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.287 ms 00:21:59.798 [2024-11-18 12:04:57.272754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.798 [2024-11-18 12:04:57.297045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.798 [2024-11-18 12:04:57.297092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:59.798 [2024-11-18 12:04:57.297103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.205 ms 00:21:59.798 [2024-11-18 12:04:57.297112] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:59.798 [2024-11-18 12:04:57.297154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:59.798 [2024-11-18 12:04:57.297170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105216 / 261120 wr_cnt: 1 state: open 00:21:59.798 [2024-11-18 12:04:57.297181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:59.798 [2024-11-18 12:04:57.297524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297810] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.297996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.298007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.298026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.298037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.298047] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.298056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:59.799 [2024-11-18 12:04:57.298074] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:59.799 [2024-11-18 12:04:57.298095] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 460b762a-5884-46ae-bbdc-38ff9e82ccce 00:21:59.799 [2024-11-18 12:04:57.298105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105216 00:21:59.799 [2024-11-18 12:04:57.298114] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106176 00:21:59.799 [2024-11-18 12:04:57.298123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105216 00:21:59.799 [2024-11-18 12:04:57.298133] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:21:59.799 [2024-11-18 12:04:57.298141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:59.799 [2024-11-18 12:04:57.298155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:59.799 [2024-11-18 12:04:57.298172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:59.799 [2024-11-18 12:04:57.298179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:59.799 [2024-11-18 12:04:57.298185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:59.799 [2024-11-18 12:04:57.298193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.799 [2024-11-18 12:04:57.298201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:59.799 [2024-11-18 12:04:57.298210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:21:59.799 [2024-11-18 12:04:57.298220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.311974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.799 [2024-11-18 12:04:57.312014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:59.799 [2024-11-18 12:04:57.312025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.735 ms 00:21:59.799 [2024-11-18 12:04:57.312039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.312437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.799 [2024-11-18 12:04:57.312456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:59.799 [2024-11-18 12:04:57.312466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:21:59.799 [2024-11-18 12:04:57.312474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.349204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.799 [2024-11-18 12:04:57.349250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.799 [2024-11-18 12:04:57.349268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.799 [2024-11-18 12:04:57.349278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.349351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.799 [2024-11-18 12:04:57.349362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:21:59.799 [2024-11-18 12:04:57.349372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.799 [2024-11-18 12:04:57.349381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.349442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.799 [2024-11-18 12:04:57.349455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.799 [2024-11-18 12:04:57.349468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.799 [2024-11-18 12:04:57.349481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.349498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.799 [2024-11-18 12:04:57.349507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.799 [2024-11-18 12:04:57.349515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.799 [2024-11-18 12:04:57.349523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.799 [2024-11-18 12:04:57.433163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.800 [2024-11-18 12:04:57.433215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.800 [2024-11-18 12:04:57.433236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.800 [2024-11-18 12:04:57.433245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.501437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.501498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.061 [2024-11-18 12:04:57.501512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.501521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.501626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.501637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.061 [2024-11-18 12:04:57.501647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.501656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.501700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.501711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.061 [2024-11-18 12:04:57.501720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.501728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.501824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.501836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.061 [2024-11-18 12:04:57.501845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.501853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.501893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.501904] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.061 [2024-11-18 12:04:57.501913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.501921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.501963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.501975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.061 [2024-11-18 12:04:57.501984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.501993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.502043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.061 [2024-11-18 12:04:57.502053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.061 [2024-11-18 12:04:57.502063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.061 [2024-11-18 12:04:57.502071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.061 [2024-11-18 12:04:57.502205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 689.680 ms, result 0 00:22:01.978 00:22:01.978 00:22:01.978 12:04:59 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:01.978 [2024-11-18 12:04:59.294972] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
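Two numbers in the shutdown and restore records above can be verified by hand. The statistics dump reports WAF consistent with total writes divided by user writes (106176 / 105216 ≈ 1.0091), and the spdk_dd invocation reads --count=262144 FTL blocks starting at --skip=131072 blocks, i.e. 1024 MiB beginning at a 512 MiB offset — the same 1024 MB the subsequent copy progress accounts for. A small sketch with values transcribed from the log; the 4 KiB block size is an assumption inferred from the 262144-block/1024-MB correspondence, not something the log states directly:

    # Verify the shutdown statistics and the spdk_dd restore window.
    total_writes = 106176          # "total writes: 106176"
    user_writes = 105216           # "user writes: 105216"
    print(f"WAF: {total_writes / user_writes:.4f}")   # -> 1.0091, as dumped

    BLOCK = 4096                   # assumed FTL block size in bytes
    skip, count = 131072, 262144   # from the spdk_dd command line
    print(f"read offset: {skip * BLOCK // (1024*1024)} MiB")   # -> 512 MiB
    print(f"read length: {count * BLOCK // (1024*1024)} MiB")  # -> 1024 MiB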
00:22:01.978 [2024-11-18 12:04:59.295128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76720 ] 00:22:01.978 [2024-11-18 12:04:59.456396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.978 [2024-11-18 12:04:59.577040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.239 [2024-11-18 12:04:59.867520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.239 [2024-11-18 12:04:59.867618] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.502 [2024-11-18 12:05:00.029213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.502 [2024-11-18 12:05:00.029277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.502 [2024-11-18 12:05:00.029299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.502 [2024-11-18 12:05:00.029308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.502 [2024-11-18 12:05:00.029363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.502 [2024-11-18 12:05:00.029374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.502 [2024-11-18 12:05:00.029386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:02.502 [2024-11-18 12:05:00.029395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.502 [2024-11-18 12:05:00.029416] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.502 [2024-11-18 12:05:00.030255] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.502 [2024-11-18 12:05:00.030299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.502 [2024-11-18 12:05:00.030309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.502 [2024-11-18 12:05:00.030319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:22:02.502 [2024-11-18 12:05:00.030327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.032669] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.503 [2024-11-18 12:05:00.046784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.046839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.503 [2024-11-18 12:05:00.046854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.118 ms 00:22:02.503 [2024-11-18 12:05:00.046864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.046943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.046954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.503 [2024-11-18 12:05:00.046963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:02.503 [2024-11-18 12:05:00.046971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.054861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:02.503 [2024-11-18 12:05:00.054904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.503 [2024-11-18 12:05:00.054915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.811 ms 00:22:02.503 [2024-11-18 12:05:00.054923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.055009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.055018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.503 [2024-11-18 12:05:00.055027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:02.503 [2024-11-18 12:05:00.055035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.055079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.055089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.503 [2024-11-18 12:05:00.055098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:02.503 [2024-11-18 12:05:00.055107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.055132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.503 [2024-11-18 12:05:00.059118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.059166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.503 [2024-11-18 12:05:00.059177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.994 ms 00:22:02.503 [2024-11-18 12:05:00.059189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.059224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.059232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.503 [2024-11-18 12:05:00.059241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:02.503 [2024-11-18 12:05:00.059249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.059299] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.503 [2024-11-18 12:05:00.059322] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.503 [2024-11-18 12:05:00.059360] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.503 [2024-11-18 12:05:00.059381] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.503 [2024-11-18 12:05:00.059514] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.503 [2024-11-18 12:05:00.059528] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.503 [2024-11-18 12:05:00.059539] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.503 [2024-11-18 12:05:00.059550] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.503 [2024-11-18 12:05:00.059560] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.503 [2024-11-18 12:05:00.059569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.503 [2024-11-18 12:05:00.059577] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.503 [2024-11-18 12:05:00.059601] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.503 [2024-11-18 12:05:00.059609] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.503 [2024-11-18 12:05:00.059620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.059628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.503 [2024-11-18 12:05:00.059636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:22:02.503 [2024-11-18 12:05:00.059646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.059729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.503 [2024-11-18 12:05:00.059739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.503 [2024-11-18 12:05:00.059747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:02.503 [2024-11-18 12:05:00.059756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.503 [2024-11-18 12:05:00.059863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.503 [2024-11-18 12:05:00.059876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.503 [2024-11-18 12:05:00.059886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.503 [2024-11-18 12:05:00.059895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.503 [2024-11-18 12:05:00.059903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.503 [2024-11-18 12:05:00.059912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.503 [2024-11-18 12:05:00.059920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.503 [2024-11-18 12:05:00.059928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.503 [2024-11-18 12:05:00.059937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.503 [2024-11-18 12:05:00.059944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.503 [2024-11-18 12:05:00.059952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.503 [2024-11-18 12:05:00.059960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.503 [2024-11-18 12:05:00.059968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.503 [2024-11-18 12:05:00.059976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.503 [2024-11-18 12:05:00.059984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.503 [2024-11-18 12:05:00.059998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.503 [2024-11-18 12:05:00.060014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.503 [2024-11-18 12:05:00.060020] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.503 [2024-11-18 12:05:00.060035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.503 [2024-11-18 12:05:00.060050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.503 [2024-11-18 12:05:00.060058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.503 [2024-11-18 12:05:00.060072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.503 [2024-11-18 12:05:00.060079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.503 [2024-11-18 12:05:00.060094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.503 [2024-11-18 12:05:00.060101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.503 [2024-11-18 12:05:00.060114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.503 [2024-11-18 12:05:00.060121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.503 [2024-11-18 12:05:00.060128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.503 [2024-11-18 12:05:00.060135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.503 [2024-11-18 12:05:00.060141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.503 [2024-11-18 12:05:00.060148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.503 [2024-11-18 12:05:00.060155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.503 [2024-11-18 12:05:00.060161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.503 [2024-11-18 12:05:00.060168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.504 [2024-11-18 12:05:00.060175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.504 [2024-11-18 12:05:00.060181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.504 [2024-11-18 12:05:00.060188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.504 [2024-11-18 12:05:00.060197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.504 [2024-11-18 12:05:00.060204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.504 [2024-11-18 12:05:00.060212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.504 [2024-11-18 12:05:00.060221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.504 [2024-11-18 12:05:00.060229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.504 [2024-11-18 12:05:00.060236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.504 [2024-11-18 12:05:00.060243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.504 
[2024-11-18 12:05:00.060251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.504 [2024-11-18 12:05:00.060258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.504 [2024-11-18 12:05:00.060266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.504 [2024-11-18 12:05:00.060274] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.504 [2024-11-18 12:05:00.060285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.504 [2024-11-18 12:05:00.060303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.504 [2024-11-18 12:05:00.060311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.504 [2024-11-18 12:05:00.060319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.504 [2024-11-18 12:05:00.060327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.504 [2024-11-18 12:05:00.060335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.504 [2024-11-18 12:05:00.060343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.504 [2024-11-18 12:05:00.060350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.504 [2024-11-18 12:05:00.060358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.504 [2024-11-18 12:05:00.060364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.504 [2024-11-18 12:05:00.060400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.504 [2024-11-18 12:05:00.060412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.504 [2024-11-18 12:05:00.060428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.504 [2024-11-18 12:05:00.060436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.504 [2024-11-18 12:05:00.060443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.504 [2024-11-18 12:05:00.060451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.060459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.504 [2024-11-18 12:05:00.060467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:22:02.504 [2024-11-18 12:05:00.060475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.091862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.091908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.504 [2024-11-18 12:05:00.091921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.343 ms 00:22:02.504 [2024-11-18 12:05:00.091930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.092023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.092032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.504 [2024-11-18 12:05:00.092041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:02.504 [2024-11-18 12:05:00.092050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.139633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.139688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.504 [2024-11-18 12:05:00.139702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.526 ms 00:22:02.504 [2024-11-18 12:05:00.139710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.139758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.139768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.504 [2024-11-18 12:05:00.139778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.504 [2024-11-18 12:05:00.139790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.140344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.140388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.504 [2024-11-18 12:05:00.140400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:02.504 [2024-11-18 12:05:00.140408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.140562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.140573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.504 [2024-11-18 12:05:00.140600] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:22:02.504 [2024-11-18 12:05:00.140616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.156127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.156168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.504 [2024-11-18 12:05:00.156182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.487 ms 00:22:02.504 [2024-11-18 12:05:00.156190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.170534] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:02.504 [2024-11-18 12:05:00.170592] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.504 [2024-11-18 12:05:00.170606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.170614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.504 [2024-11-18 12:05:00.170624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.314 ms 00:22:02.504 [2024-11-18 12:05:00.170631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.504 [2024-11-18 12:05:00.195826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.504 [2024-11-18 12:05:00.195880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.504 [2024-11-18 12:05:00.195892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.137 ms 00:22:02.504 [2024-11-18 12:05:00.195901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.767 [2024-11-18 12:05:00.208609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.208666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.768 [2024-11-18 12:05:00.208678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.655 ms 00:22:02.768 [2024-11-18 12:05:00.208685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.221182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.221227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.768 [2024-11-18 12:05:00.221239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.451 ms 00:22:02.768 [2024-11-18 12:05:00.221246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.221906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.221939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.768 [2024-11-18 12:05:00.221949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:22:02.768 [2024-11-18 12:05:00.221961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.286105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.286171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.768 [2024-11-18 12:05:00.286194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.124 ms 00:22:02.768 [2024-11-18 12:05:00.286203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.297174] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.768 [2024-11-18 12:05:00.300165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.300208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.768 [2024-11-18 12:05:00.300220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:22:02.768 [2024-11-18 12:05:00.300229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.300308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.300320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.768 [2024-11-18 12:05:00.300330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:02.768 [2024-11-18 12:05:00.300342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.302025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.302070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.768 [2024-11-18 12:05:00.302081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:22:02.768 [2024-11-18 12:05:00.302089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.302117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.302126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.768 [2024-11-18 12:05:00.302135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.768 [2024-11-18 12:05:00.302143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.302185] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.768 [2024-11-18 12:05:00.302199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.302208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.768 [2024-11-18 12:05:00.302217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:02.768 [2024-11-18 12:05:00.302227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.327341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.327392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.768 [2024-11-18 12:05:00.327406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.094 ms 00:22:02.768 [2024-11-18 12:05:00.327420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.768 [2024-11-18 12:05:00.327522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.768 [2024-11-18 12:05:00.327534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.768 [2024-11-18 12:05:00.327543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:02.768 [2024-11-18 12:05:00.327552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
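For readers decoding the startup trace above: every management step is logged by trace_step() in mngt/ftl_mngt.c as a four-field record (Action, name, duration, status), with the duration measured around the step's execution. The shell sketch below is illustrative only -- run_step() is a hypothetical stand-in, not SPDK code, and the real C implementation differs in detail -- but it reproduces the record shape seen throughout this log.

    # Hypothetical sketch: time a step and emit the same four-field
    # record shape as mngt/ftl_mngt.c:trace_step(). Not SPDK code.
    run_step() {  # usage: run_step "Initialize bands" <command...>
        local name=$1; shift
        local t0 t1 status
        t0=$(date +%s%N)                      # nanoseconds before the step
        "$@"; status=$?                       # run the step itself
        t1=$(date +%s%N)                      # nanoseconds after the step
        printf 'Action\nname: %s\nduration: %s ms\nstatus: %d\n' \
            "$name" \
            "$(awk -v d=$((t1 - t0)) 'BEGIN { printf "%.3f", d / 1e6 }')" \
            "$status"
        return "$status"
    }

For example, run_step "Initialize memory pools" true would emit a record of the same shape as the "Initialize memory pools" entry at the top of this startup sequence.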
00:22:02.768 [2024-11-18 12:05:00.328937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.208 ms, result 0 00:22:04.157  [2024-11-18T12:05:02.803Z] Copying: 8192/1048576 [kB] (8192 kBps) [2024-11-18T12:05:03.748Z] Copying: 21/1024 [MB] (13 MBps) [2024-11-18T12:05:04.692Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-18T12:05:05.640Z] Copying: 45/1024 [MB] (11 MBps) [2024-11-18T12:05:06.583Z] Copying: 57/1024 [MB] (12 MBps) [2024-11-18T12:05:07.526Z] Copying: 74/1024 [MB] (16 MBps) [2024-11-18T12:05:08.918Z] Copying: 91/1024 [MB] (17 MBps) [2024-11-18T12:05:09.860Z] Copying: 110/1024 [MB] (18 MBps) [2024-11-18T12:05:10.802Z] Copying: 130/1024 [MB] (20 MBps) [2024-11-18T12:05:11.745Z] Copying: 147/1024 [MB] (16 MBps) [2024-11-18T12:05:12.688Z] Copying: 165/1024 [MB] (17 MBps) [2024-11-18T12:05:13.630Z] Copying: 186/1024 [MB] (21 MBps) [2024-11-18T12:05:14.574Z] Copying: 207/1024 [MB] (20 MBps) [2024-11-18T12:05:15.961Z] Copying: 221/1024 [MB] (13 MBps) [2024-11-18T12:05:16.534Z] Copying: 234/1024 [MB] (13 MBps) [2024-11-18T12:05:17.922Z] Copying: 245/1024 [MB] (10 MBps) [2024-11-18T12:05:18.917Z] Copying: 255/1024 [MB] (10 MBps) [2024-11-18T12:05:19.883Z] Copying: 266/1024 [MB] (10 MBps) [2024-11-18T12:05:20.826Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-18T12:05:21.772Z] Copying: 287/1024 [MB] (11 MBps) [2024-11-18T12:05:22.716Z] Copying: 317/1024 [MB] (29 MBps) [2024-11-18T12:05:23.662Z] Copying: 329/1024 [MB] (11 MBps) [2024-11-18T12:05:24.608Z] Copying: 339/1024 [MB] (10 MBps) [2024-11-18T12:05:25.550Z] Copying: 350/1024 [MB] (10 MBps) [2024-11-18T12:05:26.939Z] Copying: 364/1024 [MB] (14 MBps) [2024-11-18T12:05:27.885Z] Copying: 382/1024 [MB] (17 MBps) [2024-11-18T12:05:28.832Z] Copying: 392/1024 [MB] (10 MBps) [2024-11-18T12:05:29.780Z] Copying: 410/1024 [MB] (18 MBps) [2024-11-18T12:05:30.726Z] Copying: 430/1024 [MB] (19 MBps) [2024-11-18T12:05:31.673Z] Copying: 441/1024 [MB] (11 MBps) [2024-11-18T12:05:32.617Z] Copying: 455/1024 [MB] (13 MBps) [2024-11-18T12:05:33.556Z] Copying: 471/1024 [MB] (16 MBps) [2024-11-18T12:05:34.946Z] Copying: 493/1024 [MB] (21 MBps) [2024-11-18T12:05:35.890Z] Copying: 508/1024 [MB] (14 MBps) [2024-11-18T12:05:36.829Z] Copying: 527/1024 [MB] (19 MBps) [2024-11-18T12:05:37.771Z] Copying: 547/1024 [MB] (20 MBps) [2024-11-18T12:05:38.714Z] Copying: 564/1024 [MB] (16 MBps) [2024-11-18T12:05:39.659Z] Copying: 579/1024 [MB] (15 MBps) [2024-11-18T12:05:40.605Z] Copying: 598/1024 [MB] (18 MBps) [2024-11-18T12:05:41.550Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-18T12:05:42.941Z] Copying: 626/1024 [MB] (16 MBps) [2024-11-18T12:05:43.887Z] Copying: 636/1024 [MB] (10 MBps) [2024-11-18T12:05:44.832Z] Copying: 650/1024 [MB] (14 MBps) [2024-11-18T12:05:45.775Z] Copying: 664/1024 [MB] (13 MBps) [2024-11-18T12:05:46.720Z] Copying: 674/1024 [MB] (10 MBps) [2024-11-18T12:05:47.670Z] Copying: 684/1024 [MB] (10 MBps) [2024-11-18T12:05:48.677Z] Copying: 695/1024 [MB] (10 MBps) [2024-11-18T12:05:49.622Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-18T12:05:50.567Z] Copying: 715/1024 [MB] (10 MBps) [2024-11-18T12:05:51.958Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-18T12:05:52.533Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-18T12:05:53.922Z] Copying: 754/1024 [MB] (17 MBps) [2024-11-18T12:05:54.867Z] Copying: 764/1024 [MB] (10 MBps) [2024-11-18T12:05:55.812Z] Copying: 775/1024 [MB] (10 MBps) [2024-11-18T12:05:56.755Z] Copying: 785/1024 [MB] (10 MBps) [2024-11-18T12:05:57.723Z] Copying: 796/1024 [MB] (11 MBps) 
[2024-11-18T12:05:58.666Z] Copying: 810/1024 [MB] (14 MBps) [2024-11-18T12:05:59.610Z] Copying: 821/1024 [MB] (10 MBps) [2024-11-18T12:06:00.552Z] Copying: 832/1024 [MB] (11 MBps) [2024-11-18T12:06:01.941Z] Copying: 852/1024 [MB] (19 MBps) [2024-11-18T12:06:02.885Z] Copying: 868/1024 [MB] (15 MBps) [2024-11-18T12:06:03.828Z] Copying: 884/1024 [MB] (16 MBps) [2024-11-18T12:06:04.773Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-18T12:06:05.717Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-18T12:06:06.664Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-18T12:06:07.609Z] Copying: 926/1024 [MB] (10 MBps) [2024-11-18T12:06:08.555Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-18T12:06:09.945Z] Copying: 948/1024 [MB] (11 MBps) [2024-11-18T12:06:10.884Z] Copying: 967/1024 [MB] (19 MBps) [2024-11-18T12:06:11.827Z] Copying: 981/1024 [MB] (13 MBps) [2024-11-18T12:06:12.769Z] Copying: 994/1024 [MB] (12 MBps) [2024-11-18T12:06:13.714Z] Copying: 1009/1024 [MB] (15 MBps) [2024-11-18T12:06:13.714Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 12:06:13.559552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.559670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.013 [2024-11-18 12:06:13.559689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:16.013 [2024-11-18 12:06:13.559702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.013 [2024-11-18 12:06:13.559745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:16.013 [2024-11-18 12:06:13.563216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.563265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.013 [2024-11-18 12:06:13.563279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.451 ms 00:23:16.013 [2024-11-18 12:06:13.563290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.013 [2024-11-18 12:06:13.563602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.563702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.013 [2024-11-18 12:06:13.563714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:16.013 [2024-11-18 12:06:13.563724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.013 [2024-11-18 12:06:13.570814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.570867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.013 [2024-11-18 12:06:13.570879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.063 ms 00:23:16.013 [2024-11-18 12:06:13.570889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.013 [2024-11-18 12:06:13.577086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.577129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:16.013 [2024-11-18 12:06:13.577141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.151 ms 00:23:16.013 [2024-11-18 12:06:13.577149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.013 [2024-11-18 12:06:13.604552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.604619] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.013 [2024-11-18 12:06:13.604632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.348 ms 00:23:16.013 [2024-11-18 12:06:13.604640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.013 [2024-11-18 12:06:13.620030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.013 [2024-11-18 12:06:13.620086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.013 [2024-11-18 12:06:13.620100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.343 ms 00:23:16.013 [2024-11-18 12:06:13.620109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.276 [2024-11-18 12:06:13.896378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.276 [2024-11-18 12:06:13.896434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.276 [2024-11-18 12:06:13.896448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 276.213 ms 00:23:16.276 [2024-11-18 12:06:13.896457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.276 [2024-11-18 12:06:13.922924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.276 [2024-11-18 12:06:13.922974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:16.276 [2024-11-18 12:06:13.922987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.451 ms 00:23:16.276 [2024-11-18 12:06:13.922995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.276 [2024-11-18 12:06:13.948248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.276 [2024-11-18 12:06:13.948293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:16.276 [2024-11-18 12:06:13.948319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.206 ms 00:23:16.276 [2024-11-18 12:06:13.948327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.540 [2024-11-18 12:06:13.976536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.540 [2024-11-18 12:06:13.976593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.540 [2024-11-18 12:06:13.976607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.808 ms 00:23:16.540 [2024-11-18 12:06:13.976615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.540 [2024-11-18 12:06:14.001569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.540 [2024-11-18 12:06:14.001631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.540 [2024-11-18 12:06:14.001644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.881 ms 00:23:16.540 [2024-11-18 12:06:14.001651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.540 [2024-11-18 12:06:14.001696] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:16.540 [2024-11-18 12:06:14.001712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:16.540 [2024-11-18 12:06:14.001724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 
261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:16.540 [2024-11-18 12:06:14.001797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.001997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002122] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 
12:06:14.002333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.541 [2024-11-18 12:06:14.002381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.542 [2024-11-18 12:06:14.002527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.542 [2024-11-18 12:06:14.002535] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 460b762a-5884-46ae-bbdc-38ff9e82ccce 00:23:16.542 [2024-11-18 
12:06:14.002545] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:16.542 [2024-11-18 12:06:14.002553] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26816 00:23:16.542 [2024-11-18 12:06:14.002560] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25856 00:23:16.542 [2024-11-18 12:06:14.002570] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0371 00:23:16.542 [2024-11-18 12:06:14.002578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.542 [2024-11-18 12:06:14.002603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.542 [2024-11-18 12:06:14.002611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.542 [2024-11-18 12:06:14.002625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.542 [2024-11-18 12:06:14.002632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.542 [2024-11-18 12:06:14.002640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.542 [2024-11-18 12:06:14.002648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.542 [2024-11-18 12:06:14.002657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:23:16.542 [2024-11-18 12:06:14.002664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.016204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.542 [2024-11-18 12:06:14.016245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.542 [2024-11-18 12:06:14.016256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.520 ms 00:23:16.542 [2024-11-18 12:06:14.016271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.016693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.542 [2024-11-18 12:06:14.016716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.542 [2024-11-18 12:06:14.016726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:23:16.542 [2024-11-18 12:06:14.016733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.053201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.053251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.542 [2024-11-18 12:06:14.053270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.053279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.053351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.053361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.542 [2024-11-18 12:06:14.053370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.053380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.053450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.053462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.542 [2024-11-18 12:06:14.053472] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.053485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.053501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.053510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.542 [2024-11-18 12:06:14.053518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.053526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.137540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.137614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.542 [2024-11-18 12:06:14.137636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.137645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.205762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.205821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.542 [2024-11-18 12:06:14.205833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.205842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.205916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.205928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.542 [2024-11-18 12:06:14.205937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.205947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.205993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.206004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.542 [2024-11-18 12:06:14.206013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.206021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.206120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.206131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.542 [2024-11-18 12:06:14.206140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.206148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.206183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.206192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.542 [2024-11-18 12:06:14.206202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.206209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.206251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.206261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:23:16.542 [2024-11-18 12:06:14.206269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.206278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.206327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.542 [2024-11-18 12:06:14.206338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.542 [2024-11-18 12:06:14.206347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.542 [2024-11-18 12:06:14.206355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.542 [2024-11-18 12:06:14.206493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.905 ms, result 0 00:23:17.486 00:23:17.486 00:23:17.486 12:06:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:20.100 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74435 00:23:20.100 12:06:17 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74435 ']' 00:23:20.100 12:06:17 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74435 00:23:20.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74435) - No such process 00:23:20.100 Process with pid 74435 is not found 00:23:20.100 Remove shared memory files 00:23:20.100 12:06:17 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74435 is not found' 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:20.100 12:06:17 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:20.100 00:23:20.100 real 4m58.267s 00:23:20.100 user 4m45.530s 00:23:20.100 sys 0m12.512s 00:23:20.100 12:06:17 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:20.100 ************************************ 00:23:20.100 END TEST ftl_restore 00:23:20.100 ************************************ 00:23:20.100 12:06:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:20.100 12:06:17 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:20.100 12:06:17 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:20.100 12:06:17 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:20.100 12:06:17 ftl -- common/autotest_common.sh@10 -- # set 
+x 00:23:20.100 ************************************ 00:23:20.100 START TEST ftl_dirty_shutdown 00:23:20.100 ************************************ 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:20.100 * Looking for test storage... 00:23:20.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.100 --rc genhtml_branch_coverage=1 00:23:20.100 --rc genhtml_function_coverage=1 00:23:20.100 --rc genhtml_legend=1 00:23:20.100 --rc geninfo_all_blocks=1 00:23:20.100 --rc geninfo_unexecuted_blocks=1 00:23:20.100 00:23:20.100 ' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.100 --rc genhtml_branch_coverage=1 00:23:20.100 --rc genhtml_function_coverage=1 00:23:20.100 --rc genhtml_legend=1 00:23:20.100 --rc geninfo_all_blocks=1 00:23:20.100 --rc geninfo_unexecuted_blocks=1 00:23:20.100 00:23:20.100 ' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.100 --rc genhtml_branch_coverage=1 00:23:20.100 --rc genhtml_function_coverage=1 00:23:20.100 --rc genhtml_legend=1 00:23:20.100 --rc geninfo_all_blocks=1 00:23:20.100 --rc geninfo_unexecuted_blocks=1 00:23:20.100 00:23:20.100 ' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:20.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.100 --rc genhtml_branch_coverage=1 00:23:20.100 --rc genhtml_function_coverage=1 00:23:20.100 --rc genhtml_legend=1 00:23:20.100 --rc geninfo_all_blocks=1 00:23:20.100 --rc geninfo_unexecuted_blocks=1 00:23:20.100 00:23:20.100 ' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.100 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:20.101 12:06:17 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77590 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77590 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 77590 ']' 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.101 12:06:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:20.101 [2024-11-18 12:06:17.652211] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
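The option handling and target startup traced above (ftl/dirty_shutdown.sh@14 through @47) condense to roughly the sketch below. restore_kill and waitforlisten are SPDK test-harness helpers visible in the trace; the -u branch is an assumption (judging by the getopts string ':u:c:' and the later '[ -n '' ]' check it presumably takes an FTL UUID, but this run never passes one).

    # Minimal sketch of the flow the xtrace implies; not the verbatim script.
    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # NV cache controller BDF, here 0000:00:10.0
        u) uuid=$OPTARG ;;       # assumption: FTL UUID for restart scenarios
      esac
    done
    shift 2                      # as traced: drops the parsed '-c <bdf>' pair
    device=$1                    # base device BDF, here 0000:00:11.0

    timeout=240                  # seconds; later handed to rpc.py as '-t 240'
    block_size=4096
    chunk_size=262144
    data_size=262144             # in blocks: 262144 * 4096 B = 1 GiB of test data

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    "$spdk_tgt_bin" -m 0x1 &     # spdk_tgt pinned to core mask 0x1, as traced
    svcpid=$!
    waitforlisten "$svcpid"      # blocks until /var/tmp/spdk.sock answers RPCs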
00:23:20.101 [2024-11-18 12:06:17.652362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77590 ] 00:23:20.362 [2024-11-18 12:06:17.814640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.362 [2024-11-18 12:06:17.932489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:20.933 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:21.504 12:06:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:21.504 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:21.504 { 00:23:21.504 "name": "nvme0n1", 00:23:21.504 "aliases": [ 00:23:21.504 "3826247f-daf6-4d26-8972-b064d213a65f" 00:23:21.504 ], 00:23:21.504 "product_name": "NVMe disk", 00:23:21.504 "block_size": 4096, 00:23:21.504 "num_blocks": 1310720, 00:23:21.504 "uuid": "3826247f-daf6-4d26-8972-b064d213a65f", 00:23:21.504 "numa_id": -1, 00:23:21.504 "assigned_rate_limits": { 00:23:21.504 "rw_ios_per_sec": 0, 00:23:21.504 "rw_mbytes_per_sec": 0, 00:23:21.504 "r_mbytes_per_sec": 0, 00:23:21.504 "w_mbytes_per_sec": 0 00:23:21.504 }, 00:23:21.504 "claimed": true, 00:23:21.504 "claim_type": "read_many_write_one", 00:23:21.504 "zoned": false, 00:23:21.504 "supported_io_types": { 00:23:21.504 "read": true, 00:23:21.504 "write": true, 00:23:21.504 "unmap": true, 00:23:21.504 "flush": true, 00:23:21.504 "reset": true, 00:23:21.504 "nvme_admin": true, 00:23:21.504 "nvme_io": true, 00:23:21.504 "nvme_io_md": false, 00:23:21.504 "write_zeroes": true, 00:23:21.504 "zcopy": false, 00:23:21.504 "get_zone_info": false, 00:23:21.504 "zone_management": false, 00:23:21.504 "zone_append": false, 00:23:21.504 "compare": true, 00:23:21.504 "compare_and_write": false, 00:23:21.504 "abort": true, 00:23:21.504 "seek_hole": false, 00:23:21.504 "seek_data": false, 00:23:21.504 
"copy": true, 00:23:21.504 "nvme_iov_md": false 00:23:21.504 }, 00:23:21.504 "driver_specific": { 00:23:21.504 "nvme": [ 00:23:21.504 { 00:23:21.504 "pci_address": "0000:00:11.0", 00:23:21.504 "trid": { 00:23:21.504 "trtype": "PCIe", 00:23:21.504 "traddr": "0000:00:11.0" 00:23:21.504 }, 00:23:21.504 "ctrlr_data": { 00:23:21.504 "cntlid": 0, 00:23:21.504 "vendor_id": "0x1b36", 00:23:21.504 "model_number": "QEMU NVMe Ctrl", 00:23:21.504 "serial_number": "12341", 00:23:21.504 "firmware_revision": "8.0.0", 00:23:21.504 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:21.504 "oacs": { 00:23:21.504 "security": 0, 00:23:21.504 "format": 1, 00:23:21.504 "firmware": 0, 00:23:21.504 "ns_manage": 1 00:23:21.504 }, 00:23:21.504 "multi_ctrlr": false, 00:23:21.504 "ana_reporting": false 00:23:21.504 }, 00:23:21.504 "vs": { 00:23:21.504 "nvme_version": "1.4" 00:23:21.504 }, 00:23:21.504 "ns_data": { 00:23:21.504 "id": 1, 00:23:21.504 "can_share": false 00:23:21.504 } 00:23:21.504 } 00:23:21.504 ], 00:23:21.504 "mp_policy": "active_passive" 00:23:21.504 } 00:23:21.504 } 00:23:21.504 ]' 00:23:21.504 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:21.504 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:21.504 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3ada74ae-4f7d-400a-be50-119bafb80292 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:21.764 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ada74ae-4f7d-400a-be50-119bafb80292 00:23:22.023 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:22.281 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5be85932-ebc9-45a3-975b-297f666c3472 00:23:22.281 12:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5be85932-ebc9-45a3-975b-297f666c3472 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=9901608d-c1c2-4032-9881-1b9d0595837d 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=9901608d-c1c2-4032-9881-1b9d0595837d 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=9901608d-c1c2-4032-9881-1b9d0595837d 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:22.540 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:22.799 { 00:23:22.799 "name": "9901608d-c1c2-4032-9881-1b9d0595837d", 00:23:22.799 "aliases": [ 00:23:22.799 "lvs/nvme0n1p0" 00:23:22.799 ], 00:23:22.799 "product_name": "Logical Volume", 00:23:22.799 "block_size": 4096, 00:23:22.799 "num_blocks": 26476544, 00:23:22.799 "uuid": "9901608d-c1c2-4032-9881-1b9d0595837d", 00:23:22.799 "assigned_rate_limits": { 00:23:22.799 "rw_ios_per_sec": 0, 00:23:22.799 "rw_mbytes_per_sec": 0, 00:23:22.799 "r_mbytes_per_sec": 0, 00:23:22.799 "w_mbytes_per_sec": 0 00:23:22.799 }, 00:23:22.799 "claimed": false, 00:23:22.799 "zoned": false, 00:23:22.799 "supported_io_types": { 00:23:22.799 "read": true, 00:23:22.799 "write": true, 00:23:22.799 "unmap": true, 00:23:22.799 "flush": false, 00:23:22.799 "reset": true, 00:23:22.799 "nvme_admin": false, 00:23:22.799 "nvme_io": false, 00:23:22.799 "nvme_io_md": false, 00:23:22.799 "write_zeroes": true, 00:23:22.799 "zcopy": false, 00:23:22.799 "get_zone_info": false, 00:23:22.799 "zone_management": false, 00:23:22.799 "zone_append": false, 00:23:22.799 "compare": false, 00:23:22.799 "compare_and_write": false, 00:23:22.799 "abort": false, 00:23:22.799 "seek_hole": true, 00:23:22.799 "seek_data": true, 00:23:22.799 "copy": false, 00:23:22.799 "nvme_iov_md": false 00:23:22.799 }, 00:23:22.799 "driver_specific": { 00:23:22.799 "lvol": { 00:23:22.799 "lvol_store_uuid": "5be85932-ebc9-45a3-975b-297f666c3472", 00:23:22.799 "base_bdev": "nvme0n1", 00:23:22.799 "thin_provision": true, 00:23:22.799 "num_allocated_clusters": 0, 00:23:22.799 "snapshot": false, 00:23:22.799 "clone": false, 00:23:22.799 "esnap_clone": false 00:23:22.799 } 00:23:22.799 } 00:23:22.799 } 00:23:22.799 ]' 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:22.799 12:06:20 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=9901608d-c1c2-4032-9881-1b9d0595837d 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:23.057 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:23.316 { 00:23:23.316 "name": "9901608d-c1c2-4032-9881-1b9d0595837d", 00:23:23.316 "aliases": [ 00:23:23.316 "lvs/nvme0n1p0" 00:23:23.316 ], 00:23:23.316 "product_name": "Logical Volume", 00:23:23.316 "block_size": 4096, 00:23:23.316 "num_blocks": 26476544, 00:23:23.316 "uuid": "9901608d-c1c2-4032-9881-1b9d0595837d", 00:23:23.316 "assigned_rate_limits": { 00:23:23.316 "rw_ios_per_sec": 0, 00:23:23.316 "rw_mbytes_per_sec": 0, 00:23:23.316 "r_mbytes_per_sec": 0, 00:23:23.316 "w_mbytes_per_sec": 0 00:23:23.316 }, 00:23:23.316 "claimed": false, 00:23:23.316 "zoned": false, 00:23:23.316 "supported_io_types": { 00:23:23.316 "read": true, 00:23:23.316 "write": true, 00:23:23.316 "unmap": true, 00:23:23.316 "flush": false, 00:23:23.316 "reset": true, 00:23:23.316 "nvme_admin": false, 00:23:23.316 "nvme_io": false, 00:23:23.316 "nvme_io_md": false, 00:23:23.316 "write_zeroes": true, 00:23:23.316 "zcopy": false, 00:23:23.316 "get_zone_info": false, 00:23:23.316 "zone_management": false, 00:23:23.316 "zone_append": false, 00:23:23.316 "compare": false, 00:23:23.316 "compare_and_write": false, 00:23:23.316 "abort": false, 00:23:23.316 "seek_hole": true, 00:23:23.316 "seek_data": true, 00:23:23.316 "copy": false, 00:23:23.316 "nvme_iov_md": false 00:23:23.316 }, 00:23:23.316 "driver_specific": { 00:23:23.316 "lvol": { 00:23:23.316 "lvol_store_uuid": "5be85932-ebc9-45a3-975b-297f666c3472", 00:23:23.316 "base_bdev": "nvme0n1", 00:23:23.316 "thin_provision": true, 00:23:23.316 "num_allocated_clusters": 0, 00:23:23.316 "snapshot": false, 00:23:23.316 "clone": false, 00:23:23.316 "esnap_clone": false 00:23:23.316 } 00:23:23.316 } 00:23:23.316 } 00:23:23.316 ]' 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:23.316 12:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=9901608d-c1c2-4032-9881-1b9d0595837d 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9901608d-c1c2-4032-9881-1b9d0595837d 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:23.575 { 00:23:23.575 "name": "9901608d-c1c2-4032-9881-1b9d0595837d", 00:23:23.575 "aliases": [ 00:23:23.575 "lvs/nvme0n1p0" 00:23:23.575 ], 00:23:23.575 "product_name": "Logical Volume", 00:23:23.575 "block_size": 4096, 00:23:23.575 "num_blocks": 26476544, 00:23:23.575 "uuid": "9901608d-c1c2-4032-9881-1b9d0595837d", 00:23:23.575 "assigned_rate_limits": { 00:23:23.575 "rw_ios_per_sec": 0, 00:23:23.575 "rw_mbytes_per_sec": 0, 00:23:23.575 "r_mbytes_per_sec": 0, 00:23:23.575 "w_mbytes_per_sec": 0 00:23:23.575 }, 00:23:23.575 "claimed": false, 00:23:23.575 "zoned": false, 00:23:23.575 "supported_io_types": { 00:23:23.575 "read": true, 00:23:23.575 "write": true, 00:23:23.575 "unmap": true, 00:23:23.575 "flush": false, 00:23:23.575 "reset": true, 00:23:23.575 "nvme_admin": false, 00:23:23.575 "nvme_io": false, 00:23:23.575 "nvme_io_md": false, 00:23:23.575 "write_zeroes": true, 00:23:23.575 "zcopy": false, 00:23:23.575 "get_zone_info": false, 00:23:23.575 "zone_management": false, 00:23:23.575 "zone_append": false, 00:23:23.575 "compare": false, 00:23:23.575 "compare_and_write": false, 00:23:23.575 "abort": false, 00:23:23.575 "seek_hole": true, 00:23:23.575 "seek_data": true, 00:23:23.575 "copy": false, 00:23:23.575 "nvme_iov_md": false 00:23:23.575 }, 00:23:23.575 "driver_specific": { 00:23:23.575 "lvol": { 00:23:23.575 "lvol_store_uuid": "5be85932-ebc9-45a3-975b-297f666c3472", 00:23:23.575 "base_bdev": "nvme0n1", 00:23:23.575 "thin_provision": true, 00:23:23.575 "num_allocated_clusters": 0, 00:23:23.575 "snapshot": false, 00:23:23.575 "clone": false, 00:23:23.575 "esnap_clone": false 00:23:23.575 } 00:23:23.575 } 00:23:23.575 } 00:23:23.575 ]' 00:23:23.575 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9901608d-c1c2-4032-9881-1b9d0595837d 
--l2p_dram_limit 10' 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:23.836 12:06:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9901608d-c1c2-4032-9881-1b9d0595837d --l2p_dram_limit 10 -c nvc0n1p0 00:23:23.836 [2024-11-18 12:06:21.500355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.500393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:23.836 [2024-11-18 12:06:21.500407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:23.836 [2024-11-18 12:06:21.500413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.500458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.500465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:23.836 [2024-11-18 12:06:21.500474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:23.836 [2024-11-18 12:06:21.500479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.500498] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:23.836 [2024-11-18 12:06:21.501095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:23.836 [2024-11-18 12:06:21.501147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.501153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:23.836 [2024-11-18 12:06:21.501161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:23:23.836 [2024-11-18 12:06:21.501167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.501235] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0494e82a-5272-4595-8f59-cd2d8a7eb200 00:23:23.836 [2024-11-18 12:06:21.502171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.502194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:23.836 [2024-11-18 12:06:21.502202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:23.836 [2024-11-18 12:06:21.502209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.506837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.506869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:23.836 [2024-11-18 12:06:21.506877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:23:23.836 [2024-11-18 12:06:21.506884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.506950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.506958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:23.836 [2024-11-18 12:06:21.506964] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:23.836 [2024-11-18 12:06:21.506974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.507013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.507022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:23.836 [2024-11-18 12:06:21.507028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:23.836 [2024-11-18 12:06:21.507037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.507054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:23.836 [2024-11-18 12:06:21.509938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.509961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:23.836 [2024-11-18 12:06:21.509971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:23:23.836 [2024-11-18 12:06:21.509977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.510003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.510010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:23.836 [2024-11-18 12:06:21.510017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:23.836 [2024-11-18 12:06:21.510022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.510037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:23.836 [2024-11-18 12:06:21.510141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:23.836 [2024-11-18 12:06:21.510152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:23.836 [2024-11-18 12:06:21.510160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:23.836 [2024-11-18 12:06:21.510169] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:23.836 [2024-11-18 12:06:21.510175] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:23.836 [2024-11-18 12:06:21.510183] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:23.836 [2024-11-18 12:06:21.510188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:23.836 [2024-11-18 12:06:21.510197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:23.836 [2024-11-18 12:06:21.510202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:23.836 [2024-11-18 12:06:21.510209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.510214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:23.836 [2024-11-18 12:06:21.510222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:23:23.836 [2024-11-18 12:06:21.510231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.510297] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.836 [2024-11-18 12:06:21.510303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:23.836 [2024-11-18 12:06:21.510309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:23.836 [2024-11-18 12:06:21.510315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.836 [2024-11-18 12:06:21.510390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:23.836 [2024-11-18 12:06:21.510397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:23.836 [2024-11-18 12:06:21.510404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:23.836 [2024-11-18 12:06:21.510410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.836 [2024-11-18 12:06:21.510416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:23.836 [2024-11-18 12:06:21.510421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:23.836 [2024-11-18 12:06:21.510428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:23.836 [2024-11-18 12:06:21.510432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:23.836 [2024-11-18 12:06:21.510439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:23.836 [2024-11-18 12:06:21.510444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:23.836 [2024-11-18 12:06:21.510450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:23.836 [2024-11-18 12:06:21.510456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:23.836 [2024-11-18 12:06:21.510462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:23.836 [2024-11-18 12:06:21.510468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:23.836 [2024-11-18 12:06:21.510475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:23.836 [2024-11-18 12:06:21.510480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.836 [2024-11-18 12:06:21.510488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:23.836 [2024-11-18 12:06:21.510493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:23.836 [2024-11-18 12:06:21.510499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.836 [2024-11-18 12:06:21.510504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:23.836 [2024-11-18 12:06:21.510511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.837 [2024-11-18 12:06:21.510522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:23.837 [2024-11-18 12:06:21.510526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.837 [2024-11-18 12:06:21.510537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:23.837 [2024-11-18 12:06:21.510544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.837 [2024-11-18 12:06:21.510555] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:23.837 [2024-11-18 12:06:21.510560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:23.837 [2024-11-18 12:06:21.510571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:23.837 [2024-11-18 12:06:21.510579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:23.837 [2024-11-18 12:06:21.510609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:23.837 [2024-11-18 12:06:21.510614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:23.837 [2024-11-18 12:06:21.510620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:23.837 [2024-11-18 12:06:21.510626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:23.837 [2024-11-18 12:06:21.510632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:23.837 [2024-11-18 12:06:21.510637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:23.837 [2024-11-18 12:06:21.510648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:23.837 [2024-11-18 12:06:21.510654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:23.837 [2024-11-18 12:06:21.510666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:23.837 [2024-11-18 12:06:21.510673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:23.837 [2024-11-18 12:06:21.510680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:23.837 [2024-11-18 12:06:21.510687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:23.837 [2024-11-18 12:06:21.510697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:23.837 [2024-11-18 12:06:21.510702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:23.837 [2024-11-18 12:06:21.510709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:23.837 [2024-11-18 12:06:21.510714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:23.837 [2024-11-18 12:06:21.510720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:23.837 [2024-11-18 12:06:21.510729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:23.837 [2024-11-18 12:06:21.510737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:23.837 [2024-11-18 12:06:21.510752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:23.837 [2024-11-18 12:06:21.510757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:23.837 [2024-11-18 12:06:21.510764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:23.837 [2024-11-18 12:06:21.510769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:23.837 [2024-11-18 12:06:21.510776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:23.837 [2024-11-18 12:06:21.510781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:23.837 [2024-11-18 12:06:21.510788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:23.837 [2024-11-18 12:06:21.510798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:23.837 [2024-11-18 12:06:21.510806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:23.837 [2024-11-18 12:06:21.510835] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:23.837 [2024-11-18 12:06:21.510843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:23.837 [2024-11-18 12:06:21.510856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:23.837 [2024-11-18 12:06:21.510861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:23.837 [2024-11-18 12:06:21.510868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:23.837 [2024-11-18 12:06:21.510873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.837 [2024-11-18 12:06:21.510880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:23.837 [2024-11-18 12:06:21.510886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:23:23.837 [2024-11-18 12:06:21.510893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.837 [2024-11-18 12:06:21.510934] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:23.837 [2024-11-18 12:06:21.510945] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:28.043 [2024-11-18 12:06:25.405532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.405817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:28.043 [2024-11-18 12:06:25.405848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3894.583 ms 00:23:28.043 [2024-11-18 12:06:25.405860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.438430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.438498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.043 [2024-11-18 12:06:25.438513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.319 ms 00:23:28.043 [2024-11-18 12:06:25.438524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.438691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.438708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:28.043 [2024-11-18 12:06:25.438718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:28.043 [2024-11-18 12:06:25.438735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.474719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.474774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.043 [2024-11-18 12:06:25.474786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.944 ms 00:23:28.043 [2024-11-18 12:06:25.474797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.474831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.474847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.043 [2024-11-18 12:06:25.474856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:28.043 [2024-11-18 12:06:25.474867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.475505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.475553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.043 [2024-11-18 12:06:25.475565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:23:28.043 [2024-11-18 12:06:25.475576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.475721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.475733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.043 [2024-11-18 12:06:25.475747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:28.043 [2024-11-18 12:06:25.475760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.493577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.493644] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.043 [2024-11-18 12:06:25.493656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.797 ms 00:23:28.043 [2024-11-18 12:06:25.493666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.508101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:28.043 [2024-11-18 12:06:25.512011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.512056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:28.043 [2024-11-18 12:06:25.512070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.233 ms 00:23:28.043 [2024-11-18 12:06:25.512079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.624852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.625143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:28.043 [2024-11-18 12:06:25.625184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.728 ms 00:23:28.043 [2024-11-18 12:06:25.625195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.625442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.625463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:28.043 [2024-11-18 12:06:25.625481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:23:28.043 [2024-11-18 12:06:25.625490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.652566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.652658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:28.043 [2024-11-18 12:06:25.652679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.003 ms 00:23:28.043 [2024-11-18 12:06:25.652689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.676299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.676434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:28.043 [2024-11-18 12:06:25.676456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.544 ms 00:23:28.043 [2024-11-18 12:06:25.676464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.043 [2024-11-18 12:06:25.677065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.043 [2024-11-18 12:06:25.677085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:28.043 [2024-11-18 12:06:25.677098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:23:28.043 [2024-11-18 12:06:25.677108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.750069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.304 [2024-11-18 12:06:25.750108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:28.304 [2024-11-18 12:06:25.750125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.923 ms 00:23:28.304 [2024-11-18 12:06:25.750134] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.776479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.304 [2024-11-18 12:06:25.776520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:28.304 [2024-11-18 12:06:25.776535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.262 ms 00:23:28.304 [2024-11-18 12:06:25.776543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.800751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.304 [2024-11-18 12:06:25.800790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:28.304 [2024-11-18 12:06:25.800804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.146 ms 00:23:28.304 [2024-11-18 12:06:25.800811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.832676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.304 [2024-11-18 12:06:25.832730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:28.304 [2024-11-18 12:06:25.832746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.817 ms 00:23:28.304 [2024-11-18 12:06:25.832755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.832806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.304 [2024-11-18 12:06:25.832815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:28.304 [2024-11-18 12:06:25.832830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:28.304 [2024-11-18 12:06:25.832837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.832924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.304 [2024-11-18 12:06:25.832935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:28.304 [2024-11-18 12:06:25.832947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:28.304 [2024-11-18 12:06:25.832955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.304 [2024-11-18 12:06:25.833936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4333.084 ms, result 0 00:23:28.304 { 00:23:28.304 "name": "ftl0", 00:23:28.304 "uuid": "0494e82a-5272-4595-8f59-cd2d8a7eb200" 00:23:28.304 } 00:23:28.304 12:06:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:28.304 12:06:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:28.566 12:06:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:28.566 12:06:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:28.566 12:06:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:28.827 /dev/nbd0 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:28.827 1+0 records in 00:23:28.827 1+0 records out 00:23:28.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598766 s, 6.8 MB/s 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:23:28.827 12:06:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:28.827 [2024-11-18 12:06:26.419519] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:23:28.827 [2024-11-18 12:06:26.419691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77732 ] 00:23:29.088 [2024-11-18 12:06:26.586685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.088 [2024-11-18 12:06:26.710772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.476  [2024-11-18T12:06:29.120Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-18T12:06:30.064Z] Copying: 373/1024 [MB] (186 MBps) [2024-11-18T12:06:31.000Z] Copying: 610/1024 [MB] (237 MBps) [2024-11-18T12:06:32.053Z] Copying: 854/1024 [MB] (243 MBps) [2024-11-18T12:06:32.311Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:23:34.610 00:23:34.610 12:06:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:37.139 12:06:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:37.139 [2024-11-18 12:06:34.389855] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
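Two details in the trace above are worth unpacking. The copy size: --bs=4096 --count=262144 is 262144 x 4096 B = 1024 MiB, matching the final "Copying: 1024/1024 [MB]" record, so the random test file covers exactly data_size blocks. And the waitfornbd helper (autotest_common.sh@870 through @891) reduces to the polling pattern sketched here; the sleep between probes is an assumption, since this run finds the device on the first attempt.

    # Condensed from the xtrace: wait for the nbd device to register, then
    # prove it is readable with one direct 4 KiB read. The retry bound (20),
    # the scratch file, and the non-empty size check all mirror the trace.
    waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                # assumption: brief pause between probes
      done
      for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct && break
        sleep 0.1
      done
      size=$(stat -c %s "$testdir/nbdtest")
      rm -f "$testdir/nbdtest"
      [[ $size != 0 ]]           # the '[' 4096 '!=' 0 ']' seen in the trace
    }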
00:23:37.139 [2024-11-18 12:06:34.389969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77821 ] 00:23:37.139 [2024-11-18 12:06:34.545170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.139 [2024-11-18 12:06:34.631985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.516  [2024-11-18T12:06:37.161Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-18T12:06:38.102Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-18T12:06:39.045Z] Copying: 62/1024 [MB] (10 MBps) [2024-11-18T12:06:39.992Z] Copying: 73120/1048576 [kB] (9208 kBps) [2024-11-18T12:06:40.934Z] Copying: 82/1024 [MB] (11 MBps) [2024-11-18T12:06:41.875Z] Copying: 95/1024 [MB] (12 MBps) [2024-11-18T12:06:43.261Z] Copying: 112/1024 [MB] (16 MBps) [2024-11-18T12:06:43.834Z] Copying: 130/1024 [MB] (18 MBps) [2024-11-18T12:06:45.219Z] Copying: 147/1024 [MB] (17 MBps) [2024-11-18T12:06:46.163Z] Copying: 164/1024 [MB] (16 MBps) [2024-11-18T12:06:47.107Z] Copying: 181/1024 [MB] (16 MBps) [2024-11-18T12:06:48.049Z] Copying: 198/1024 [MB] (17 MBps) [2024-11-18T12:06:48.995Z] Copying: 215/1024 [MB] (16 MBps) [2024-11-18T12:06:49.934Z] Copying: 230/1024 [MB] (14 MBps) [2024-11-18T12:06:50.877Z] Copying: 247/1024 [MB] (17 MBps) [2024-11-18T12:06:52.267Z] Copying: 262/1024 [MB] (14 MBps) [2024-11-18T12:06:52.841Z] Copying: 278456/1048576 [kB] (10128 kBps) [2024-11-18T12:06:54.227Z] Copying: 283/1024 [MB] (11 MBps) [2024-11-18T12:06:55.171Z] Copying: 294/1024 [MB] (10 MBps) [2024-11-18T12:06:56.114Z] Copying: 305/1024 [MB] (11 MBps) [2024-11-18T12:06:57.056Z] Copying: 322952/1048576 [kB] (9864 kBps) [2024-11-18T12:06:57.998Z] Copying: 332856/1048576 [kB] (9904 kBps) [2024-11-18T12:06:58.936Z] Copying: 342824/1048576 [kB] (9968 kBps) [2024-11-18T12:06:59.880Z] Copying: 364/1024 [MB] (29 MBps) [2024-11-18T12:07:00.826Z] Copying: 377/1024 [MB] (13 MBps) [2024-11-18T12:07:02.214Z] Copying: 390/1024 [MB] (13 MBps) [2024-11-18T12:07:03.158Z] Copying: 410376/1048576 [kB] (10124 kBps) [2024-11-18T12:07:04.103Z] Copying: 412/1024 [MB] (12 MBps) [2024-11-18T12:07:05.047Z] Copying: 426/1024 [MB] (14 MBps) [2024-11-18T12:07:05.992Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-18T12:07:06.930Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-18T12:07:07.865Z] Copying: 473/1024 [MB] (25 MBps) [2024-11-18T12:07:09.251Z] Copying: 506/1024 [MB] (33 MBps) [2024-11-18T12:07:10.193Z] Copying: 522/1024 [MB] (15 MBps) [2024-11-18T12:07:11.137Z] Copying: 538/1024 [MB] (16 MBps) [2024-11-18T12:07:12.077Z] Copying: 552/1024 [MB] (13 MBps) [2024-11-18T12:07:13.013Z] Copying: 568/1024 [MB] (15 MBps) [2024-11-18T12:07:13.956Z] Copying: 601/1024 [MB] (33 MBps) [2024-11-18T12:07:14.900Z] Copying: 615/1024 [MB] (13 MBps) [2024-11-18T12:07:15.843Z] Copying: 628/1024 [MB] (13 MBps) [2024-11-18T12:07:17.221Z] Copying: 639/1024 [MB] (10 MBps) [2024-11-18T12:07:18.155Z] Copying: 664/1024 [MB] (24 MBps) [2024-11-18T12:07:19.091Z] Copying: 699/1024 [MB] (35 MBps) [2024-11-18T12:07:20.111Z] Copying: 734/1024 [MB] (34 MBps) [2024-11-18T12:07:21.076Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-18T12:07:22.010Z] Copying: 764/1024 [MB] (15 MBps) [2024-11-18T12:07:22.943Z] Copying: 789/1024 [MB] (25 MBps) [2024-11-18T12:07:23.880Z] Copying: 825/1024 [MB] (35 MBps) [2024-11-18T12:07:24.825Z] Copying: 860/1024 [MB] (34 MBps) [2024-11-18T12:07:26.212Z] Copying: 
870/1024 [MB] (10 MBps) [2024-11-18T12:07:27.153Z] Copying: 886/1024 [MB] (15 MBps) [2024-11-18T12:07:28.092Z] Copying: 900/1024 [MB] (13 MBps) [2024-11-18T12:07:29.026Z] Copying: 921/1024 [MB] (21 MBps) [2024-11-18T12:07:29.966Z] Copying: 955/1024 [MB] (34 MBps) [2024-11-18T12:07:30.907Z] Copying: 976/1024 [MB] (20 MBps) [2024-11-18T12:07:31.840Z] Copying: 989/1024 [MB] (13 MBps) [2024-11-18T12:07:32.412Z] Copying: 1018/1024 [MB] (28 MBps) [2024-11-18T12:07:32.983Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:24:35.282 00:24:35.282 12:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:35.282 12:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:35.282 12:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:35.543 [2024-11-18 12:07:33.201970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.543 [2024-11-18 12:07:33.202019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:35.543 [2024-11-18 12:07:33.202033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:35.543 [2024-11-18 12:07:33.202043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.543 [2024-11-18 12:07:33.202069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:35.543 [2024-11-18 12:07:33.204707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.543 [2024-11-18 12:07:33.204735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:35.543 [2024-11-18 12:07:33.204748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:24:35.543 [2024-11-18 12:07:33.204757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.543 [2024-11-18 12:07:33.206975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.543 [2024-11-18 12:07:33.207007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:35.543 [2024-11-18 12:07:33.207018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:24:35.543 [2024-11-18 12:07:33.207026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.543 [2024-11-18 12:07:33.223244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.543 [2024-11-18 12:07:33.223279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:35.543 [2024-11-18 12:07:33.223292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.197 ms 00:24:35.543 [2024-11-18 12:07:33.223300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.543 [2024-11-18 12:07:33.229519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.543 [2024-11-18 12:07:33.229547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:35.543 [2024-11-18 12:07:33.229559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:24:35.543 [2024-11-18 12:07:33.229568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.254607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.254643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:35.806 
[2024-11-18 12:07:33.254655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:24:35.806 [2024-11-18 12:07:33.254663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.270276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.270311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:35.806 [2024-11-18 12:07:33.270324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.572 ms 00:24:35.806 [2024-11-18 12:07:33.270334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.270481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.270492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:35.806 [2024-11-18 12:07:33.270503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:35.806 [2024-11-18 12:07:33.270510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.293980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.294011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:35.806 [2024-11-18 12:07:33.294023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:24:35.806 [2024-11-18 12:07:33.294030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.316811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.316846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:35.806 [2024-11-18 12:07:33.316858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.740 ms 00:24:35.806 [2024-11-18 12:07:33.316866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.339614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.339652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:35.806 [2024-11-18 12:07:33.339665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.707 ms 00:24:35.806 [2024-11-18 12:07:33.339672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.362675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.806 [2024-11-18 12:07:33.362711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:35.806 [2024-11-18 12:07:33.362724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.923 ms 00:24:35.806 [2024-11-18 12:07:33.362730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.806 [2024-11-18 12:07:33.362770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:35.806 [2024-11-18 12:07:33.362784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362814] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.362996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.363004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.363012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:35.806 [2024-11-18 12:07:33.363021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363029] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 
12:07:33.363243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:24:35.807 [2024-11-18 12:07:33.363461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:35.807 [2024-11-18 12:07:33.363684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:35.807 [2024-11-18 12:07:33.363694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0494e82a-5272-4595-8f59-cd2d8a7eb200 00:24:35.807 [2024-11-18 12:07:33.363702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:35.807 [2024-11-18 
12:07:33.363713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:35.807 [2024-11-18 12:07:33.363721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:35.808 [2024-11-18 12:07:33.363733] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:35.808 [2024-11-18 12:07:33.363740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:35.808 [2024-11-18 12:07:33.363750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:35.808 [2024-11-18 12:07:33.363757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:35.808 [2024-11-18 12:07:33.363765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:35.808 [2024-11-18 12:07:33.363771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:35.808 [2024-11-18 12:07:33.363780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.808 [2024-11-18 12:07:33.363788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:35.808 [2024-11-18 12:07:33.363799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:24:35.808 [2024-11-18 12:07:33.363807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.808 [2024-11-18 12:07:33.376715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.808 [2024-11-18 12:07:33.376751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:35.808 [2024-11-18 12:07:33.376762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.870 ms 00:24:35.808 [2024-11-18 12:07:33.376770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.808 [2024-11-18 12:07:33.377148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.808 [2024-11-18 12:07:33.377163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:35.808 [2024-11-18 12:07:33.377174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:24:35.808 [2024-11-18 12:07:33.377181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.808 [2024-11-18 12:07:33.421943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:35.808 [2024-11-18 12:07:33.421993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:35.808 [2024-11-18 12:07:33.422007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:35.808 [2024-11-18 12:07:33.422015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.808 [2024-11-18 12:07:33.422087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:35.808 [2024-11-18 12:07:33.422096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:35.808 [2024-11-18 12:07:33.422107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:35.808 [2024-11-18 12:07:33.422115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.808 [2024-11-18 12:07:33.422217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:35.808 [2024-11-18 12:07:33.422231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:35.808 [2024-11-18 12:07:33.422241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:35.808 [2024-11-18 12:07:33.422249] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:35.808 [2024-11-18 12:07:33.422272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:35.808 [2024-11-18 12:07:33.422280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:35.808 [2024-11-18 12:07:33.422290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:35.808 [2024-11-18 12:07:33.422297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.508237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.508301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.069 [2024-11-18 12:07:33.508317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.508326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.578340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.578398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.069 [2024-11-18 12:07:33.578415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.578424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.578520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.578532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.069 [2024-11-18 12:07:33.578543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.578555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.578652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.578664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.069 [2024-11-18 12:07:33.578676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.578684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.578788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.578798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.069 [2024-11-18 12:07:33.578809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.578819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.578856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.578867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:36.069 [2024-11-18 12:07:33.578877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.578886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.578934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.578943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.069 [2024-11-18 12:07:33.578954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:36.069 [2024-11-18 12:07:33.578962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.579018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.069 [2024-11-18 12:07:33.579051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.069 [2024-11-18 12:07:33.579062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.069 [2024-11-18 12:07:33.579070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.069 [2024-11-18 12:07:33.579226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.204 ms, result 0 00:24:36.069 true 00:24:36.070 12:07:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77590 00:24:36.070 12:07:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77590 00:24:36.070 12:07:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:36.070 [2024-11-18 12:07:33.676036] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:24:36.070 [2024-11-18 12:07:33.676185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78438 ] 00:24:36.330 [2024-11-18 12:07:33.838891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.330 [2024-11-18 12:07:33.957487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.714  [2024-11-18T12:07:36.349Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-18T12:07:37.284Z] Copying: 442/1024 [MB] (253 MBps) [2024-11-18T12:07:38.218Z] Copying: 699/1024 [MB] (256 MBps) [2024-11-18T12:07:38.783Z] Copying: 951/1024 [MB] (252 MBps) [2024-11-18T12:07:39.041Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:24:41.340 00:24:41.605 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77590 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:41.605 12:07:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:41.605 [2024-11-18 12:07:39.127458] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
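The records above capture the core of the dirty-shutdown scenario: ftl0 is unloaded cleanly over RPC, the main spdk_tgt (pid 77590, started with -m 0x1) is then killed with SIGKILL, a fresh test file is generated from /dev/urandom, and dirty_shutdown.sh@88 replays it into ftl0 through a standalone spdk_dd driven by the saved ftl.json. A minimal sketch of that sequence, using only commands and flags that appear in this log; SPDK_REPO and TGT_PID are stand-ins for /home/vagrant/spdk_repo/spdk and the target's pid, not variables the test itself defines:

  sync /dev/nbd0                                      # flush the NBD-exposed FTL device
  $SPDK_REPO/scripts/rpc.py nbd_stop_disk /dev/nbd0   # detach the NBD export
  $SPDK_REPO/scripts/rpc.py bdev_ftl_unload -b ftl0   # persist FTL metadata and drop the bdev
  kill -9 $TGT_PID                                    # SIGKILL the target, as dirty_shutdown.sh@83 does
  $SPDK_REPO/build/bin/spdk_dd --if=/dev/urandom --of=$SPDK_REPO/test/ftl/testfile2 --bs=4096 --count=262144
  $SPDK_REPO/build/bin/spdk_dd --if=$SPDK_REPO/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=$SPDK_REPO/test/ftl/config/ftl.json

The --json config lets spdk_dd bring up ftl0 on its own, without the killed target: the FTL startup trace that follows (blobstore recovery, "SHM: clean 0, shm_clean 0", then restore of NV cache, valid map, band and trim metadata) shows ftl0 being rebuilt from the on-disk state before the copy begins.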
00:24:41.605 [2024-11-18 12:07:39.127601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78494 ] 00:24:41.605 [2024-11-18 12:07:39.278170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.864 [2024-11-18 12:07:39.364427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.121 [2024-11-18 12:07:39.570806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:42.121 [2024-11-18 12:07:39.570855] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:42.121 [2024-11-18 12:07:39.633484] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:42.121 [2024-11-18 12:07:39.633759] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:42.121 [2024-11-18 12:07:39.634401] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:42.121 [2024-11-18 12:07:39.812488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.121 [2024-11-18 12:07:39.812524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:42.121 [2024-11-18 12:07:39.812534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:42.121 [2024-11-18 12:07:39.812540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.121 [2024-11-18 12:07:39.812596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.121 [2024-11-18 12:07:39.812605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.121 [2024-11-18 12:07:39.812611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:42.121 [2024-11-18 12:07:39.812617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.121 [2024-11-18 12:07:39.812629] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:42.122 [2024-11-18 12:07:39.813177] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:42.122 [2024-11-18 12:07:39.813195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.122 [2024-11-18 12:07:39.813201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.122 [2024-11-18 12:07:39.813207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:42.122 [2024-11-18 12:07:39.813213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.122 [2024-11-18 12:07:39.814133] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:42.381 [2024-11-18 12:07:39.823646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.823677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:42.381 [2024-11-18 12:07:39.823686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.513 ms 00:24:42.381 [2024-11-18 12:07:39.823692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.823733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.823740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:42.381 [2024-11-18 12:07:39.823746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:42.381 [2024-11-18 12:07:39.823752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.828090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.828115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.381 [2024-11-18 12:07:39.828122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.296 ms 00:24:42.381 [2024-11-18 12:07:39.828128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.828178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.828185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:42.381 [2024-11-18 12:07:39.828191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:42.381 [2024-11-18 12:07:39.828196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.828228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.828238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:42.381 [2024-11-18 12:07:39.828244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:42.381 [2024-11-18 12:07:39.828249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.828263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:42.381 [2024-11-18 12:07:39.830808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.830831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.381 [2024-11-18 12:07:39.830838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:24:42.381 [2024-11-18 12:07:39.830844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.830868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.830875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:42.381 [2024-11-18 12:07:39.830881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:42.381 [2024-11-18 12:07:39.830886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.830900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:42.381 [2024-11-18 12:07:39.830915] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:42.381 [2024-11-18 12:07:39.830941] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:42.381 [2024-11-18 12:07:39.830952] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:42.381 [2024-11-18 12:07:39.831030] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:42.381 [2024-11-18 12:07:39.831043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:42.381 
[2024-11-18 12:07:39.831051] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:42.381 [2024-11-18 12:07:39.831059] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:42.381 [2024-11-18 12:07:39.831067] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:42.381 [2024-11-18 12:07:39.831074] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:42.381 [2024-11-18 12:07:39.831079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:42.381 [2024-11-18 12:07:39.831085] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:42.381 [2024-11-18 12:07:39.831092] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:42.381 [2024-11-18 12:07:39.831097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.831103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:42.381 [2024-11-18 12:07:39.831108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:24:42.381 [2024-11-18 12:07:39.831113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.831176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.381 [2024-11-18 12:07:39.831183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:42.381 [2024-11-18 12:07:39.831189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:42.381 [2024-11-18 12:07:39.831194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.381 [2024-11-18 12:07:39.831270] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:42.381 [2024-11-18 12:07:39.831277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:42.381 [2024-11-18 12:07:39.831284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:42.381 [2024-11-18 12:07:39.831289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.381 [2024-11-18 12:07:39.831295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:42.381 [2024-11-18 12:07:39.831300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:42.381 [2024-11-18 12:07:39.831306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:42.381 [2024-11-18 12:07:39.831311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:42.381 [2024-11-18 12:07:39.831316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:42.381 [2024-11-18 12:07:39.831321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:42.381 [2024-11-18 12:07:39.831326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:42.381 [2024-11-18 12:07:39.831335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:42.381 [2024-11-18 12:07:39.831340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:42.381 [2024-11-18 12:07:39.831345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:42.381 [2024-11-18 12:07:39.831351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:42.381 [2024-11-18 12:07:39.831357] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.381 [2024-11-18 12:07:39.831362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:42.381 [2024-11-18 12:07:39.831368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:42.382 [2024-11-18 12:07:39.831383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:42.382 [2024-11-18 12:07:39.831398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:42.382 [2024-11-18 12:07:39.831413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:42.382 [2024-11-18 12:07:39.831427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:42.382 [2024-11-18 12:07:39.831441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:42.382 [2024-11-18 12:07:39.831450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:42.382 [2024-11-18 12:07:39.831455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:42.382 [2024-11-18 12:07:39.831460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:42.382 [2024-11-18 12:07:39.831465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:42.382 [2024-11-18 12:07:39.831470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:42.382 [2024-11-18 12:07:39.831490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:42.382 [2024-11-18 12:07:39.831500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:42.382 [2024-11-18 12:07:39.831505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.382 [2024-11-18 12:07:39.831509] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:42.382 [2024-11-18 12:07:39.831515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:42.382 [2024-11-18 12:07:39.831521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.382 [2024-11-18 
12:07:39.831535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:42.382 [2024-11-18 12:07:39.831540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:42.382 [2024-11-18 12:07:39.831545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:42.382 [2024-11-18 12:07:39.831551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:42.382 [2024-11-18 12:07:39.831556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:42.382 [2024-11-18 12:07:39.831562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:42.382 [2024-11-18 12:07:39.831568] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:42.382 [2024-11-18 12:07:39.831575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:42.382 [2024-11-18 12:07:39.831598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:42.382 [2024-11-18 12:07:39.831603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:42.382 [2024-11-18 12:07:39.831609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:42.382 [2024-11-18 12:07:39.831614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:42.382 [2024-11-18 12:07:39.831619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:42.382 [2024-11-18 12:07:39.831625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:42.382 [2024-11-18 12:07:39.831630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:42.382 [2024-11-18 12:07:39.831635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:42.382 [2024-11-18 12:07:39.831641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:42.382 [2024-11-18 12:07:39.831668] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:42.382 [2024-11-18 12:07:39.831674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:42.382 [2024-11-18 12:07:39.831685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:42.382 [2024-11-18 12:07:39.831690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:42.382 [2024-11-18 12:07:39.831696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:42.382 [2024-11-18 12:07:39.831701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.382 [2024-11-18 12:07:39.831707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:42.382 [2024-11-18 12:07:39.831712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:24:42.382 [2024-11-18 12:07:39.831718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.382 [2024-11-18 12:07:39.852436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.382 [2024-11-18 12:07:39.852466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.382 [2024-11-18 12:07:39.852475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.686 ms 00:24:42.382 [2024-11-18 12:07:39.852481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.382 [2024-11-18 12:07:39.852546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.382 [2024-11-18 12:07:39.852555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:42.382 [2024-11-18 12:07:39.852561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:42.382 [2024-11-18 12:07:39.852567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.382 [2024-11-18 12:07:39.893135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.382 [2024-11-18 12:07:39.893172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:42.382 [2024-11-18 12:07:39.893182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.514 ms 00:24:42.382 [2024-11-18 12:07:39.893191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.382 [2024-11-18 12:07:39.893232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.382 [2024-11-18 12:07:39.893240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:42.382 [2024-11-18 12:07:39.893247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:42.382 [2024-11-18 12:07:39.893253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.382 [2024-11-18 12:07:39.893575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.382 [2024-11-18 12:07:39.893610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:42.382 [2024-11-18 12:07:39.893618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:24:42.382 [2024-11-18 12:07:39.893624] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.382 [2024-11-18 12:07:39.893727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.893740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:42.383 [2024-11-18 12:07:39.893746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:42.383 [2024-11-18 12:07:39.893752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:39.904094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.904120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:42.383 [2024-11-18 12:07:39.904127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.326 ms 00:24:42.383 [2024-11-18 12:07:39.904133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:39.913785] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:42.383 [2024-11-18 12:07:39.913810] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:42.383 [2024-11-18 12:07:39.913819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.913826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:42.383 [2024-11-18 12:07:39.913833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:24:42.383 [2024-11-18 12:07:39.913838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:39.932249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.932276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:42.383 [2024-11-18 12:07:39.932292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.379 ms 00:24:42.383 [2024-11-18 12:07:39.932298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:39.941065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.941091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:42.383 [2024-11-18 12:07:39.941099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.728 ms 00:24:42.383 [2024-11-18 12:07:39.941104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:39.949605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.949630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:42.383 [2024-11-18 12:07:39.949637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.474 ms 00:24:42.383 [2024-11-18 12:07:39.949643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:39.950106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.950127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:42.383 [2024-11-18 12:07:39.950134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:24:42.383 [2024-11-18 12:07:39.950140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 
[2024-11-18 12:07:39.993477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:39.993523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:42.383 [2024-11-18 12:07:39.993533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.322 ms 00:24:42.383 [2024-11-18 12:07:39.993541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.001491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:42.383 [2024-11-18 12:07:40.003519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.003540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:42.383 [2024-11-18 12:07:40.003549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.925 ms 00:24:42.383 [2024-11-18 12:07:40.003557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.003641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.003651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:42.383 [2024-11-18 12:07:40.003659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:42.383 [2024-11-18 12:07:40.003666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.003719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.003771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:42.383 [2024-11-18 12:07:40.003778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:42.383 [2024-11-18 12:07:40.003784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.003800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.003808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:42.383 [2024-11-18 12:07:40.003815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:42.383 [2024-11-18 12:07:40.003821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.003845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:42.383 [2024-11-18 12:07:40.003853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.003859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:42.383 [2024-11-18 12:07:40.003865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:42.383 [2024-11-18 12:07:40.003871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.021568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.021602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:42.383 [2024-11-18 12:07:40.021612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.680 ms 00:24:42.383 [2024-11-18 12:07:40.021619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.021965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.383 [2024-11-18 12:07:40.021997] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:42.383 [2024-11-18 12:07:40.022006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:42.383 [2024-11-18 12:07:40.022013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.383 [2024-11-18 12:07:40.022839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.994 ms, result 0 00:24:43.755  [2024-11-18T12:07:42.390Z] Copying: 54/1024 [MB] (54 MBps) [2024-11-18T12:07:43.334Z] Copying: 107/1024 [MB] (52 MBps) [2024-11-18T12:07:44.278Z] Copying: 131/1024 [MB] (24 MBps) [2024-11-18T12:07:45.220Z] Copying: 146/1024 [MB] (15 MBps) [2024-11-18T12:07:46.159Z] Copying: 166/1024 [MB] (19 MBps) [2024-11-18T12:07:47.102Z] Copying: 182/1024 [MB] (16 MBps) [2024-11-18T12:07:48.073Z] Copying: 196/1024 [MB] (14 MBps) [2024-11-18T12:07:49.472Z] Copying: 206/1024 [MB] (10 MBps) [2024-11-18T12:07:50.043Z] Copying: 219/1024 [MB] (12 MBps) [2024-11-18T12:07:51.430Z] Copying: 232/1024 [MB] (12 MBps) [2024-11-18T12:07:52.373Z] Copying: 248/1024 [MB] (16 MBps) [2024-11-18T12:07:53.315Z] Copying: 261/1024 [MB] (12 MBps) [2024-11-18T12:07:54.259Z] Copying: 271/1024 [MB] (10 MBps) [2024-11-18T12:07:55.201Z] Copying: 283/1024 [MB] (12 MBps) [2024-11-18T12:07:56.143Z] Copying: 302/1024 [MB] (18 MBps) [2024-11-18T12:07:57.084Z] Copying: 317/1024 [MB] (15 MBps) [2024-11-18T12:07:58.466Z] Copying: 332/1024 [MB] (14 MBps) [2024-11-18T12:07:59.404Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-18T12:08:00.344Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-18T12:08:01.282Z] Copying: 362/1024 [MB] (10 MBps) [2024-11-18T12:08:02.221Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-18T12:08:03.161Z] Copying: 383/1024 [MB] (10 MBps) [2024-11-18T12:08:04.096Z] Copying: 402824/1048576 [kB] (10184 kBps) [2024-11-18T12:08:05.474Z] Copying: 421/1024 [MB] (27 MBps) [2024-11-18T12:08:06.047Z] Copying: 464/1024 [MB] (43 MBps) [2024-11-18T12:08:07.432Z] Copying: 475/1024 [MB] (10 MBps) [2024-11-18T12:08:08.373Z] Copying: 496664/1048576 [kB] (10232 kBps) [2024-11-18T12:08:09.317Z] Copying: 516/1024 [MB] (31 MBps) [2024-11-18T12:08:10.260Z] Copying: 526/1024 [MB] (10 MBps) [2024-11-18T12:08:11.199Z] Copying: 539/1024 [MB] (12 MBps) [2024-11-18T12:08:12.140Z] Copying: 550/1024 [MB] (11 MBps) [2024-11-18T12:08:13.087Z] Copying: 562/1024 [MB] (12 MBps) [2024-11-18T12:08:14.474Z] Copying: 576/1024 [MB] (13 MBps) [2024-11-18T12:08:15.049Z] Copying: 587/1024 [MB] (10 MBps) [2024-11-18T12:08:16.437Z] Copying: 602/1024 [MB] (14 MBps) [2024-11-18T12:08:17.382Z] Copying: 614/1024 [MB] (11 MBps) [2024-11-18T12:08:18.315Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-18T12:08:19.247Z] Copying: 643/1024 [MB] (19 MBps) [2024-11-18T12:08:20.177Z] Copying: 671/1024 [MB] (27 MBps) [2024-11-18T12:08:21.109Z] Copying: 698/1024 [MB] (27 MBps) [2024-11-18T12:08:22.043Z] Copying: 729/1024 [MB] (31 MBps) [2024-11-18T12:08:23.427Z] Copying: 757/1024 [MB] (27 MBps) [2024-11-18T12:08:24.371Z] Copying: 775/1024 [MB] (18 MBps) [2024-11-18T12:08:25.404Z] Copying: 798/1024 [MB] (22 MBps) [2024-11-18T12:08:26.338Z] Copying: 810/1024 [MB] (12 MBps) [2024-11-18T12:08:27.271Z] Copying: 836/1024 [MB] (26 MBps) [2024-11-18T12:08:28.204Z] Copying: 865/1024 [MB] (28 MBps) [2024-11-18T12:08:29.136Z] Copying: 893/1024 [MB] (28 MBps) [2024-11-18T12:08:30.072Z] Copying: 922/1024 [MB] (28 MBps) [2024-11-18T12:08:31.448Z] Copying: 947/1024 [MB] (24 MBps) [2024-11-18T12:08:32.380Z] Copying: 
971/1024 [MB] (24 MBps) [2024-11-18T12:08:33.316Z] Copying: 999/1024 [MB] (27 MBps) [2024-11-18T12:08:33.316Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-18 12:08:33.006788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.006823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:35.615 [2024-11-18 12:08:33.006835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:35.615 [2024-11-18 12:08:33.006841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.006860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:35.615 [2024-11-18 12:08:33.008964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.008986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:35.615 [2024-11-18 12:08:33.008994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:25:35.615 [2024-11-18 12:08:33.009002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.010852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.010873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:35.615 [2024-11-18 12:08:33.010881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 00:25:35.615 [2024-11-18 12:08:33.010887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.023067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.023095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:35.615 [2024-11-18 12:08:33.023103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:25:35.615 [2024-11-18 12:08:33.023109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.027945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.027964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:35.615 [2024-11-18 12:08:33.027972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.818 ms 00:25:35.615 [2024-11-18 12:08:33.027979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.047211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.047234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:35.615 [2024-11-18 12:08:33.047241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.198 ms 00:25:35.615 [2024-11-18 12:08:33.047247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.058964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.058984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:35.615 [2024-11-18 12:08:33.058996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.691 ms 00:25:35.615 [2024-11-18 12:08:33.059002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.060770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:35.615 [2024-11-18 12:08:33.060791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:35.615 [2024-11-18 12:08:33.060798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:25:35.615 [2024-11-18 12:08:33.060804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.615 [2024-11-18 12:08:33.078653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.615 [2024-11-18 12:08:33.078674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:35.615 [2024-11-18 12:08:33.078682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.839 ms 00:25:35.615 [2024-11-18 12:08:33.078687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.616 [2024-11-18 12:08:33.096379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.616 [2024-11-18 12:08:33.096400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.616 [2024-11-18 12:08:33.096407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.667 ms 00:25:35.616 [2024-11-18 12:08:33.096412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.616 [2024-11-18 12:08:33.113777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.616 [2024-11-18 12:08:33.113797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.616 [2024-11-18 12:08:33.113805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.341 ms 00:25:35.616 [2024-11-18 12:08:33.113810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.616 [2024-11-18 12:08:33.130799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.616 [2024-11-18 12:08:33.130819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.616 [2024-11-18 12:08:33.130826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.948 ms 00:25:35.616 [2024-11-18 12:08:33.130832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.616 [2024-11-18 12:08:33.130856] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.616 [2024-11-18 12:08:33.130865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:25:35.616 [2024-11-18 12:08:33.130873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: 
free 00:25:35.616 [2024-11-18 12:08:33.130919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.130997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 
261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:35.616 [2024-11-18 12:08:33.131244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131333] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:35.617 [2024-11-18 12:08:33.131434] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.617 [2024-11-18 12:08:33.131439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0494e82a-5272-4595-8f59-cd2d8a7eb200 00:25:35.617 [2024-11-18 12:08:33.131445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:25:35.617 [2024-11-18 12:08:33.131451] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:25:35.617 [2024-11-18 12:08:33.131461] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:25:35.617 [2024-11-18 12:08:33.131467] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:25:35.617 [2024-11-18 12:08:33.131472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.617 [2024-11-18 12:08:33.131486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.617 [2024-11-18 12:08:33.131494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.617 [2024-11-18 12:08:33.131500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.617 
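
Aside on the statistics dump above: ftl_debug.c reports WAF (write amplification factor) as total media writes divided by user writes, and the numbers are self-consistent — 1728 / 768 = 2.25 matches the printed 2.2500, the extra 960 block writes on top of the 768 user LBAs being FTL metadata and band-management traffic. A minimal sketch that recomputes it from the dump; the embedded excerpt and the regex are illustrative, not part of the test harness:

```python
import re

# Recompute write amplification (WAF) from the ftl_debug.c dump above.
# The excerpt below is a hand-copied sample of the relevant records.
dump = """
[FTL][ftl0] total writes: 1728
[FTL][ftl0] user writes: 768
[FTL][ftl0] WAF: 2.2500
"""

fields = dict(re.findall(r"\[FTL\]\[ftl0\] (total writes|user writes|WAF): ([0-9.]+)", dump))
waf = int(fields["total writes"]) / int(fields["user writes"])  # 1728 / 768
assert abs(waf - float(fields["WAF"])) < 1e-4                   # == 2.2500
print(f"WAF = {waf:.4f}")
```
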
[2024-11-18 12:08:33.131505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.617 [2024-11-18 12:08:33.131510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.617 [2024-11-18 12:08:33.131516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.617 [2024-11-18 12:08:33.131523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:25:35.617 [2024-11-18 12:08:33.131528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.140936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.617 [2024-11-18 12:08:33.140955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.617 [2024-11-18 12:08:33.140963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.396 ms 00:25:35.617 [2024-11-18 12:08:33.140968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.141231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.617 [2024-11-18 12:08:33.141239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.617 [2024-11-18 12:08:33.141245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:25:35.617 [2024-11-18 12:08:33.141250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.166912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.166935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.617 [2024-11-18 12:08:33.166944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.166951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.166991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.166997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.617 [2024-11-18 12:08:33.167003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.167009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.167052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.167059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.617 [2024-11-18 12:08:33.167066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.167071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.167085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.167091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.617 [2024-11-18 12:08:33.167097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.167102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.225166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.225192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.617 [2024-11-18 12:08:33.225200] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.225210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.273022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.273051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.617 [2024-11-18 12:08:33.273059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.273066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.273118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.273125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.617 [2024-11-18 12:08:33.273131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.273137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.273166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.273173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.617 [2024-11-18 12:08:33.273179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.273185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.617 [2024-11-18 12:08:33.273250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.617 [2024-11-18 12:08:33.273258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.617 [2024-11-18 12:08:33.273264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.617 [2024-11-18 12:08:33.273270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.618 [2024-11-18 12:08:33.273291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.618 [2024-11-18 12:08:33.273300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.618 [2024-11-18 12:08:33.273307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.618 [2024-11-18 12:08:33.273312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.618 [2024-11-18 12:08:33.273340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.618 [2024-11-18 12:08:33.273346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.618 [2024-11-18 12:08:33.273352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.618 [2024-11-18 12:08:33.273358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.618 [2024-11-18 12:08:33.273391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.618 [2024-11-18 12:08:33.273398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.618 [2024-11-18 12:08:33.273404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.618 [2024-11-18 12:08:33.273410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.618 [2024-11-18 12:08:33.273496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.687 ms, result 0 00:25:36.554 00:25:36.554 00:25:36.554 
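
The 'FTL shutdown' finish message above closes out the per-step trace: every management step is logged as an Action / name / duration / status quadruple, and the per-step durations sum to roughly the reported process total (266.687 ms here, the remainder being time spent between steps). A small sketch of how those records fold back into a timeline; the sample lines are stand-ins copied from the shutdown sequence above, and the parsing is illustrative rather than part of the harness:

```python
import re

# Each FTL management step appears in the log as four trace_step
# records (Action, name, duration, status); only the name and duration
# records are needed to rebuild a timeline. Sample lines copied from
# the shutdown sequence above.
records = [
    "mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata",
    "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.198 ms",
    "mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock",
    "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.341 ms",
    "mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state",
    "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.948 ms",
]

steps, pending = [], None
for line in records:
    if m := re.search(r"name: (.+)$", line):
        pending = m.group(1)
    elif m := re.search(r"duration: ([0-9.]+) ms", line):
        steps.append((pending, float(m.group(1))))

for name, ms in steps:
    print(f"{ms:8.3f} ms  {name}")
print(f"{sum(ms for _, ms in steps):8.3f} ms  across {len(steps)} steps")
```
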
12:08:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:38.467 12:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:38.467 [2024-11-18 12:08:36.078579] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:25:38.467 [2024-11-18 12:08:36.078686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79080 ] 00:25:38.725 [2024-11-18 12:08:36.229260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.725 [2024-11-18 12:08:36.303257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.984 [2024-11-18 12:08:36.508419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:38.984 [2024-11-18 12:08:36.508463] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:38.985 [2024-11-18 12:08:36.659707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.659738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:38.985 [2024-11-18 12:08:36.659751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:38.985 [2024-11-18 12:08:36.659758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.659792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.659800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.985 [2024-11-18 12:08:36.659809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:38.985 [2024-11-18 12:08:36.659815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.659827] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:38.985 [2024-11-18 12:08:36.660331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:38.985 [2024-11-18 12:08:36.660348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.660355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.985 [2024-11-18 12:08:36.660362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:25:38.985 [2024-11-18 12:08:36.660367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.661266] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:38.985 [2024-11-18 12:08:36.671197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.671220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:38.985 [2024-11-18 12:08:36.671228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.932 ms 00:25:38.985 [2024-11-18 12:08:36.671235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.671279] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.671286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:38.985 [2024-11-18 12:08:36.671292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:38.985 [2024-11-18 12:08:36.671298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.675649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.675668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.985 [2024-11-18 12:08:36.675675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:25:38.985 [2024-11-18 12:08:36.675681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.675736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.675743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.985 [2024-11-18 12:08:36.675749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:38.985 [2024-11-18 12:08:36.675755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.675787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.675794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:38.985 [2024-11-18 12:08:36.675801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:38.985 [2024-11-18 12:08:36.675806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.675820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:38.985 [2024-11-18 12:08:36.678391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.678412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.985 [2024-11-18 12:08:36.678419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:25:38.985 [2024-11-18 12:08:36.678427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.678453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.678459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:38.985 [2024-11-18 12:08:36.678466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:38.985 [2024-11-18 12:08:36.678471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.678486] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:38.985 [2024-11-18 12:08:36.678500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:38.985 [2024-11-18 12:08:36.678526] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:38.985 [2024-11-18 12:08:36.678540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:38.985 [2024-11-18 12:08:36.678629] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:38.985 
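
The layout dump that follows is worth a sanity check: 20971520 L2P entries at the reported 4-byte address size come out to exactly the 80.00 MiB l2p region, and the same entry count at a 4 KiB FTL block size (an assumption about this configuration; the dump does not print the block size) maps 80 GiB of user LBA space, which lines up with the 102400 MiB data_btm region at what looks like 20% overprovisioning. A quick check with the constants copied from the dump:

```python
# Consistency check of the FTL layout numbers in the dump below.
# Constants are copied from the log; the 4 KiB FTL block size is an
# assumption about this configuration, not something the dump prints.
MiB = 1024 * 1024

l2p_entries = 20971520          # "L2P entries"
l2p_addr_sz = 4                 # "L2P address size" (bytes per entry)
data_region = 102400 * MiB      # "Region data_btm ... blocks: 102400.00 MiB"
ftl_block   = 4096              # assumed FTL block size

l2p_table = l2p_entries * l2p_addr_sz
user_space = l2p_entries * ftl_block

assert l2p_table == 80 * MiB                 # matches "Region l2p ... 80.00 MiB"
assert user_space == int(data_region * 0.8)  # i.e. 20% overprovisioning

print(f"L2P table: {l2p_table // MiB} MiB, mapped user space: {user_space // 2**30} GiB")
```
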
[2024-11-18 12:08:36.678637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:38.985 [2024-11-18 12:08:36.678645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:38.985 [2024-11-18 12:08:36.678653] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:38.985 [2024-11-18 12:08:36.678660] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:38.985 [2024-11-18 12:08:36.678666] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:38.985 [2024-11-18 12:08:36.678672] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:38.985 [2024-11-18 12:08:36.678679] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:38.985 [2024-11-18 12:08:36.678685] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:38.985 [2024-11-18 12:08:36.678693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.678698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:38.985 [2024-11-18 12:08:36.678704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:25:38.985 [2024-11-18 12:08:36.678710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.678772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.985 [2024-11-18 12:08:36.678779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:38.985 [2024-11-18 12:08:36.678784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:38.985 [2024-11-18 12:08:36.678790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.985 [2024-11-18 12:08:36.678866] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:38.985 [2024-11-18 12:08:36.678879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:38.985 [2024-11-18 12:08:36.678888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:38.985 [2024-11-18 12:08:36.678897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.985 [2024-11-18 12:08:36.678906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:38.985 [2024-11-18 12:08:36.678915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:38.985 [2024-11-18 12:08:36.678924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:38.985 [2024-11-18 12:08:36.678930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:38.985 [2024-11-18 12:08:36.678936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:38.985 [2024-11-18 12:08:36.678942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:38.985 [2024-11-18 12:08:36.678948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:38.985 [2024-11-18 12:08:36.678954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:38.985 [2024-11-18 12:08:36.678959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:38.986 [2024-11-18 12:08:36.678963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:38.986 [2024-11-18 
12:08:36.678969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:38.986 [2024-11-18 12:08:36.678979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.986 [2024-11-18 12:08:36.678985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:38.986 [2024-11-18 12:08:36.678990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:38.986 [2024-11-18 12:08:36.678994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.986 [2024-11-18 12:08:36.678999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:38.986 [2024-11-18 12:08:36.679004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.986 [2024-11-18 12:08:36.679014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:38.986 [2024-11-18 12:08:36.679019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.986 [2024-11-18 12:08:36.679030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:38.986 [2024-11-18 12:08:36.679035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.986 [2024-11-18 12:08:36.679045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:38.986 [2024-11-18 12:08:36.679049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.986 [2024-11-18 12:08:36.679059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:38.986 [2024-11-18 12:08:36.679064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:38.986 [2024-11-18 12:08:36.679073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:38.986 [2024-11-18 12:08:36.679078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:38.986 [2024-11-18 12:08:36.679083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:38.986 [2024-11-18 12:08:36.679088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:38.986 [2024-11-18 12:08:36.679094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:38.986 [2024-11-18 12:08:36.679099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:38.986 [2024-11-18 12:08:36.679109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:38.986 [2024-11-18 12:08:36.679114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679120] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:38.986 [2024-11-18 12:08:36.679126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:38.986 [2024-11-18 12:08:36.679131] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:25:38.986 [2024-11-18 12:08:36.679138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.986 [2024-11-18 12:08:36.679144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:38.986 [2024-11-18 12:08:36.679149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:38.986 [2024-11-18 12:08:36.679154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:38.986 [2024-11-18 12:08:36.679160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:38.986 [2024-11-18 12:08:36.679165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:38.986 [2024-11-18 12:08:36.679170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:38.986 [2024-11-18 12:08:36.679176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:38.986 [2024-11-18 12:08:36.679183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:38.986 [2024-11-18 12:08:36.679196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:38.986 [2024-11-18 12:08:36.679201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:38.986 [2024-11-18 12:08:36.679206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:38.986 [2024-11-18 12:08:36.679212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:38.986 [2024-11-18 12:08:36.679217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:38.986 [2024-11-18 12:08:36.679222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:38.986 [2024-11-18 12:08:36.679228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:38.986 [2024-11-18 12:08:36.679233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:38.986 [2024-11-18 12:08:36.679238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:25:38.986 [2024-11-18 12:08:36.679265] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:38.986 [2024-11-18 12:08:36.679273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:38.986 [2024-11-18 12:08:36.679286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:38.986 [2024-11-18 12:08:36.679291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:38.986 [2024-11-18 12:08:36.679297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:38.986 [2024-11-18 12:08:36.679303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.986 [2024-11-18 12:08:36.679309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:38.986 [2024-11-18 12:08:36.679315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:25:38.986 [2024-11-18 12:08:36.679320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.245 [2024-11-18 12:08:36.700285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.245 [2024-11-18 12:08:36.700309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:39.245 [2024-11-18 12:08:36.700317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.933 ms 00:25:39.245 [2024-11-18 12:08:36.700322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.245 [2024-11-18 12:08:36.700389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.245 [2024-11-18 12:08:36.700396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:39.246 [2024-11-18 12:08:36.700402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:39.246 [2024-11-18 12:08:36.700407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.744835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.744862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:39.246 [2024-11-18 12:08:36.744871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.387 ms 00:25:39.246 [2024-11-18 12:08:36.744878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.744907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.744914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:39.246 [2024-11-18 12:08:36.744921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:39.246 [2024-11-18 12:08:36.744929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.745222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.745242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:25:39.246 [2024-11-18 12:08:36.745250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:25:39.246 [2024-11-18 12:08:36.745256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.745351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.745359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:39.246 [2024-11-18 12:08:36.745365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:39.246 [2024-11-18 12:08:36.745371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.755754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.755774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:39.246 [2024-11-18 12:08:36.755782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.365 ms 00:25:39.246 [2024-11-18 12:08:36.755790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.765903] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:25:39.246 [2024-11-18 12:08:36.765926] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:39.246 [2024-11-18 12:08:36.765935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.765941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:39.246 [2024-11-18 12:08:36.765949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.069 ms 00:25:39.246 [2024-11-18 12:08:36.765954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.786347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.786374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:39.246 [2024-11-18 12:08:36.786382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.363 ms 00:25:39.246 [2024-11-18 12:08:36.786389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.795224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.795251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:39.246 [2024-11-18 12:08:36.795259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:25:39.246 [2024-11-18 12:08:36.795264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.804109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.804129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:39.246 [2024-11-18 12:08:36.804137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.820 ms 00:25:39.246 [2024-11-18 12:08:36.804142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.804592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.804608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:39.246 [2024-11-18 12:08:36.804616] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:25:39.246 [2024-11-18 12:08:36.804624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.847741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.847772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:39.246 [2024-11-18 12:08:36.847785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.103 ms 00:25:39.246 [2024-11-18 12:08:36.847792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.855528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:39.246 [2024-11-18 12:08:36.857124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.857144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:39.246 [2024-11-18 12:08:36.857152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.299 ms 00:25:39.246 [2024-11-18 12:08:36.857157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.857206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.857215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:39.246 [2024-11-18 12:08:36.857222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:39.246 [2024-11-18 12:08:36.857230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.857706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.857727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:39.246 [2024-11-18 12:08:36.857735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:25:39.246 [2024-11-18 12:08:36.857741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.857759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.857765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:39.246 [2024-11-18 12:08:36.857771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:39.246 [2024-11-18 12:08:36.857777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.857803] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:39.246 [2024-11-18 12:08:36.857813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.857819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:39.246 [2024-11-18 12:08:36.857825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:39.246 [2024-11-18 12:08:36.857831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.875840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.875862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:39.246 [2024-11-18 12:08:36.875871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:25:39.246 [2024-11-18 12:08:36.875880] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.875934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.246 [2024-11-18 12:08:36.875941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:39.246 [2024-11-18 12:08:36.875948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:39.246 [2024-11-18 12:08:36.875953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.246 [2024-11-18 12:08:36.876876] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 216.831 ms, result 0 00:25:40.627  [2024-11-18T12:08:39.269Z] Copying: 988/1048576 [kB] (988 kBps) [2024-11-18T12:08:40.205Z] Copying: 2052/1048576 [kB] (1064 kBps) [2024-11-18T12:08:41.148Z] Copying: 5024/1048576 [kB] (2972 kBps) [2024-11-18T12:08:42.090Z] Copying: 16/1024 [MB] (12 MBps) [2024-11-18T12:08:43.032Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-18T12:08:44.416Z] Copying: 51/1024 [MB] (17 MBps) [2024-11-18T12:08:45.358Z] Copying: 81/1024 [MB] (29 MBps) [2024-11-18T12:08:46.299Z] Copying: 100/1024 [MB] (19 MBps) [2024-11-18T12:08:47.243Z] Copying: 119/1024 [MB] (18 MBps) [2024-11-18T12:08:48.189Z] Copying: 148/1024 [MB] (29 MBps) [2024-11-18T12:08:49.134Z] Copying: 171/1024 [MB] (22 MBps) [2024-11-18T12:08:50.078Z] Copying: 193/1024 [MB] (22 MBps) [2024-11-18T12:08:51.022Z] Copying: 220/1024 [MB] (26 MBps) [2024-11-18T12:08:52.409Z] Copying: 242/1024 [MB] (22 MBps) [2024-11-18T12:08:53.354Z] Copying: 271/1024 [MB] (28 MBps) [2024-11-18T12:08:54.301Z] Copying: 286/1024 [MB] (15 MBps) [2024-11-18T12:08:55.242Z] Copying: 313/1024 [MB] (26 MBps) [2024-11-18T12:08:56.186Z] Copying: 345/1024 [MB] (32 MBps) [2024-11-18T12:08:57.127Z] Copying: 365/1024 [MB] (20 MBps) [2024-11-18T12:08:58.085Z] Copying: 383/1024 [MB] (18 MBps) [2024-11-18T12:08:59.079Z] Copying: 419/1024 [MB] (35 MBps) [2024-11-18T12:09:00.038Z] Copying: 450/1024 [MB] (30 MBps) [2024-11-18T12:09:01.424Z] Copying: 480/1024 [MB] (30 MBps) [2024-11-18T12:09:02.368Z] Copying: 512/1024 [MB] (31 MBps) [2024-11-18T12:09:03.309Z] Copying: 539/1024 [MB] (26 MBps) [2024-11-18T12:09:04.253Z] Copying: 570/1024 [MB] (31 MBps) [2024-11-18T12:09:05.195Z] Copying: 601/1024 [MB] (31 MBps) [2024-11-18T12:09:06.137Z] Copying: 636/1024 [MB] (34 MBps) [2024-11-18T12:09:07.079Z] Copying: 667/1024 [MB] (30 MBps) [2024-11-18T12:09:08.023Z] Copying: 696/1024 [MB] (29 MBps) [2024-11-18T12:09:09.408Z] Copying: 726/1024 [MB] (29 MBps) [2024-11-18T12:09:10.349Z] Copying: 756/1024 [MB] (29 MBps) [2024-11-18T12:09:11.293Z] Copying: 790/1024 [MB] (34 MBps) [2024-11-18T12:09:12.238Z] Copying: 819/1024 [MB] (29 MBps) [2024-11-18T12:09:13.181Z] Copying: 838/1024 [MB] (19 MBps) [2024-11-18T12:09:14.123Z] Copying: 863/1024 [MB] (24 MBps) [2024-11-18T12:09:15.066Z] Copying: 895/1024 [MB] (31 MBps) [2024-11-18T12:09:16.456Z] Copying: 921/1024 [MB] (26 MBps) [2024-11-18T12:09:17.030Z] Copying: 952/1024 [MB] (30 MBps) [2024-11-18T12:09:18.423Z] Copying: 985/1024 [MB] (33 MBps) [2024-11-18T12:09:18.423Z] Copying: 1017/1024 [MB] (32 MBps) [2024-11-18T12:09:18.685Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-18 12:09:18.459036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.459213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:20.984 [2024-11-18 12:09:18.459236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
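[Editor's note on reading the FTL traces above: ftl_mngt.c emits every management step as a fixed group of *NOTICE* records (Action, name, duration, status), and each management process closes with a finish_msg summary such as "Management process finished, name 'FTL startup', duration = 216.831 ms, result 0". When triaging a slow run it can help to fold those records back into a per-step table. The sketch below is a hypothetical post-processing helper, not an SPDK tool: the names NAME_RE, DUR_RE, and slowest_steps are invented here, the regexes simply mirror the record format visible in this log, and it assumes one log record per line as in the raw console output.

import re
import sys

# Hypothetical triage helper, not part of SPDK: pair each trace_step "name:"
# record with the "duration:" record that follows it, then report the
# slowest FTL management steps seen in the log.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def slowest_steps(lines, top=5):
    steps, pending = [], None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1).strip()  # remember the step name
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((float(m.group(1)), pending))
            pending = None
    return sorted(steps, reverse=True)[:top]

if __name__ == "__main__":
    for ms, name in slowest_steps(sys.stdin):
        print(f"{ms:10.3f} ms  {name}")

Fed this console log, it would surface steps like "Restore P2L checkpoints" (43.103 ms) at the top of the first startup sequence.]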
00:26:20.984 [2024-11-18 12:09:18.459245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.459272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:20.984 [2024-11-18 12:09:18.462773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.462825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:20.984 [2024-11-18 12:09:18.462838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms 00:26:20.984 [2024-11-18 12:09:18.462847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.463097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.463117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:20.984 [2024-11-18 12:09:18.463132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:26:20.984 [2024-11-18 12:09:18.463141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.480362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.480419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:20.984 [2024-11-18 12:09:18.480431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.201 ms 00:26:20.984 [2024-11-18 12:09:18.480440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.486661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.486722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:20.984 [2024-11-18 12:09:18.486734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:26:20.984 [2024-11-18 12:09:18.486747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.515260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.515339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:20.984 [2024-11-18 12:09:18.515352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.467 ms 00:26:20.984 [2024-11-18 12:09:18.515361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.531638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.531694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:20.984 [2024-11-18 12:09:18.531708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.223 ms 00:26:20.984 [2024-11-18 12:09:18.531717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.535836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.535888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:20.984 [2024-11-18 12:09:18.535900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.057 ms 00:26:20.984 [2024-11-18 12:09:18.535909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.562634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 
12:09:18.562686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:20.984 [2024-11-18 12:09:18.562699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.698 ms 00:26:20.984 [2024-11-18 12:09:18.562706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.588656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.588705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:20.984 [2024-11-18 12:09:18.588732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.898 ms 00:26:20.984 [2024-11-18 12:09:18.588739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.614489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.614542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:20.984 [2024-11-18 12:09:18.614555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:26:20.984 [2024-11-18 12:09:18.614562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.639729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.984 [2024-11-18 12:09:18.639779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:20.984 [2024-11-18 12:09:18.639791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.054 ms 00:26:20.984 [2024-11-18 12:09:18.639799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.984 [2024-11-18 12:09:18.639848] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:20.984 [2024-11-18 12:09:18.639865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:20.984 [2024-11-18 12:09:18.639877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:20.984 [2024-11-18 12:09:18.639886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:20.984 [2024-11-18 12:09:18.639894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 
12:09:18.639965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.639998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:26:20.985 [2024-11-18 12:09:18.640155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:20.985 [2024-11-18 12:09:18.640614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:20.986 [2024-11-18 12:09:18.640698] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:20.986 [2024-11-18 12:09:18.640707] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0494e82a-5272-4595-8f59-cd2d8a7eb200 00:26:20.986 [2024-11-18 12:09:18.640716] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:20.986 [2024-11-18 12:09:18.640723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:26:20.986 [2024-11-18 12:09:18.640731] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:26:20.986 [2024-11-18 12:09:18.640748] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:26:20.986 [2024-11-18 12:09:18.640755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:20.986 [2024-11-18 12:09:18.640763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:20.986 [2024-11-18 12:09:18.640772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:20.986 [2024-11-18 12:09:18.640786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:20.986 [2024-11-18 12:09:18.640793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:20.986 [2024-11-18 12:09:18.640801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.986 [2024-11-18 12:09:18.640809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:20.986 [2024-11-18 12:09:18.640819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:26:20.986 [2024-11-18 
12:09:18.640827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.986 [2024-11-18 12:09:18.654760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.986 [2024-11-18 12:09:18.654813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:20.986 [2024-11-18 12:09:18.654824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.912 ms 00:26:20.986 [2024-11-18 12:09:18.654833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.986 [2024-11-18 12:09:18.655241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.986 [2024-11-18 12:09:18.655258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:20.986 [2024-11-18 12:09:18.655267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:26:20.986 [2024-11-18 12:09:18.655274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.692776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.692835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.248 [2024-11-18 12:09:18.692848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.692856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.692921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.692931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.248 [2024-11-18 12:09:18.692939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.692948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.693046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.693063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.248 [2024-11-18 12:09:18.693072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.693079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.693095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.693103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.248 [2024-11-18 12:09:18.693110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.693118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.780032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.780091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.248 [2024-11-18 12:09:18.780105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.780114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.849835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.849895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.248 [2024-11-18 12:09:18.849908] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.849917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.849974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.849984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.248 [2024-11-18 12:09:18.850001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.850009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.850070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.850081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.248 [2024-11-18 12:09:18.850090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.850098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.850198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.850209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.248 [2024-11-18 12:09:18.850218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.850229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.850266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.850276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:21.248 [2024-11-18 12:09:18.850285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.850293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.850336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.850346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.248 [2024-11-18 12:09:18.850354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.850365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.850411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.248 [2024-11-18 12:09:18.850423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.248 [2024-11-18 12:09:18.850432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.248 [2024-11-18 12:09:18.850440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.248 [2024-11-18 12:09:18.850576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.502 ms, result 0 00:26:22.192 00:26:22.192 00:26:22.192 12:09:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:24.743 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:24.743 12:09:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:24.743 [2024-11-18 12:09:21.934771] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:26:24.743 [2024-11-18 12:09:21.934927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79549 ] 00:26:24.743 [2024-11-18 12:09:22.101552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.743 [2024-11-18 12:09:22.219318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.004 [2024-11-18 12:09:22.510313] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:25.004 [2024-11-18 12:09:22.510406] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:25.004 [2024-11-18 12:09:22.670406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.004 [2024-11-18 12:09:22.670475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:25.004 [2024-11-18 12:09:22.670502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:25.004 [2024-11-18 12:09:22.670514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.004 [2024-11-18 12:09:22.670604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.004 [2024-11-18 12:09:22.670620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.004 [2024-11-18 12:09:22.670638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:25.004 [2024-11-18 12:09:22.670651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.004 [2024-11-18 12:09:22.670683] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:25.004 [2024-11-18 12:09:22.671759] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:25.004 [2024-11-18 12:09:22.671809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.004 [2024-11-18 12:09:22.671822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.004 [2024-11-18 12:09:22.671835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:26:25.004 [2024-11-18 12:09:22.671846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.004 [2024-11-18 12:09:22.673643] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:25.005 [2024-11-18 12:09:22.687730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.687782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:25.005 [2024-11-18 12:09:22.687801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.090 ms 00:26:25.005 [2024-11-18 12:09:22.687812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.687911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.687927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:25.005 [2024-11-18 12:09:22.687941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:25.005 [2024-11-18 
12:09:22.687954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.696095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.696143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.005 [2024-11-18 12:09:22.696158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.033 ms 00:26:25.005 [2024-11-18 12:09:22.696170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.696281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.696298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.005 [2024-11-18 12:09:22.696318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:25.005 [2024-11-18 12:09:22.696330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.696392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.696408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:25.005 [2024-11-18 12:09:22.696422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:25.005 [2024-11-18 12:09:22.696435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.696468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:25.005 [2024-11-18 12:09:22.700676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.700720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.005 [2024-11-18 12:09:22.700736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:26:25.005 [2024-11-18 12:09:22.700751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.700801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.700815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:25.005 [2024-11-18 12:09:22.700827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:25.005 [2024-11-18 12:09:22.700839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.700913] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:25.005 [2024-11-18 12:09:22.700945] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:25.005 [2024-11-18 12:09:22.700999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:25.005 [2024-11-18 12:09:22.701029] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:25.005 [2024-11-18 12:09:22.701178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:25.005 [2024-11-18 12:09:22.701198] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:25.005 [2024-11-18 12:09:22.701215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:25.005 
[2024-11-18 12:09:22.701232] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:25.005 [2024-11-18 12:09:22.701247] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:25.005 [2024-11-18 12:09:22.701262] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:25.005 [2024-11-18 12:09:22.701273] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:25.005 [2024-11-18 12:09:22.701286] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:25.005 [2024-11-18 12:09:22.701298] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:25.005 [2024-11-18 12:09:22.701315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.701328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:25.005 [2024-11-18 12:09:22.701343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:26:25.005 [2024-11-18 12:09:22.701354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.701477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.005 [2024-11-18 12:09:22.701504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:25.005 [2024-11-18 12:09:22.701519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:25.005 [2024-11-18 12:09:22.701531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.005 [2024-11-18 12:09:22.701697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:25.005 [2024-11-18 12:09:22.701722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:25.005 [2024-11-18 12:09:22.701737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.005 [2024-11-18 12:09:22.701751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.005 [2024-11-18 12:09:22.701764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:25.005 [2024-11-18 12:09:22.701776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:25.005 [2024-11-18 12:09:22.701790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:25.005 [2024-11-18 12:09:22.701802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:25.005 [2024-11-18 12:09:22.701815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:25.005 [2024-11-18 12:09:22.701827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.005 [2024-11-18 12:09:22.701839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:25.005 [2024-11-18 12:09:22.701851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:25.005 [2024-11-18 12:09:22.701863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.005 [2024-11-18 12:09:22.701874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:25.005 [2024-11-18 12:09:22.701887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:25.005 [2024-11-18 12:09:22.701906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.005 [2024-11-18 12:09:22.701918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:26:25.005 [2024-11-18 12:09:22.701929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:25.005 [2024-11-18 12:09:22.701940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.005 [2024-11-18 12:09:22.701951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:25.005 [2024-11-18 12:09:22.701963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:25.005 [2024-11-18 12:09:22.701974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.005 [2024-11-18 12:09:22.701985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:25.005 [2024-11-18 12:09:22.701997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:25.005 [2024-11-18 12:09:22.702008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.005 [2024-11-18 12:09:22.702019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:25.005 [2024-11-18 12:09:22.702031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:25.005 [2024-11-18 12:09:22.702042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.005 [2024-11-18 12:09:22.702053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:25.005 [2024-11-18 12:09:22.702064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:25.005 [2024-11-18 12:09:22.702076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.005 [2024-11-18 12:09:22.702088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:25.005 [2024-11-18 12:09:22.702100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:25.005 [2024-11-18 12:09:22.702111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.005 [2024-11-18 12:09:22.702123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:25.005 [2024-11-18 12:09:22.702135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:25.005 [2024-11-18 12:09:22.702148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.005 [2024-11-18 12:09:22.702160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:25.005 [2024-11-18 12:09:22.702173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:25.005 [2024-11-18 12:09:22.702185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.268 [2024-11-18 12:09:22.702197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:25.268 [2024-11-18 12:09:22.702209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:25.268 [2024-11-18 12:09:22.702221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.268 [2024-11-18 12:09:22.702232] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:25.268 [2024-11-18 12:09:22.702245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:25.268 [2024-11-18 12:09:22.702258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.268 [2024-11-18 12:09:22.702270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.268 [2024-11-18 12:09:22.702283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:25.268 [2024-11-18 12:09:22.702295] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:25.268 [2024-11-18 12:09:22.702307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:25.268 [2024-11-18 12:09:22.702319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:25.268 [2024-11-18 12:09:22.702330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:25.268 [2024-11-18 12:09:22.702343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:25.268 [2024-11-18 12:09:22.702357] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:25.268 [2024-11-18 12:09:22.702374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:25.268 [2024-11-18 12:09:22.702403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:25.268 [2024-11-18 12:09:22.702416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:25.268 [2024-11-18 12:09:22.702429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:25.268 [2024-11-18 12:09:22.702442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:25.268 [2024-11-18 12:09:22.702455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:25.268 [2024-11-18 12:09:22.702468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:25.268 [2024-11-18 12:09:22.702482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:25.268 [2024-11-18 12:09:22.702496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:25.268 [2024-11-18 12:09:22.702510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:25.268 [2024-11-18 12:09:22.702578] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:25.268 [2024-11-18 12:09:22.702614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.268 [2024-11-18 12:09:22.702642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:25.268 [2024-11-18 12:09:22.702656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:25.268 [2024-11-18 12:09:22.702670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:25.268 [2024-11-18 12:09:22.702684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.702697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:25.268 [2024-11-18 12:09:22.702711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:26:25.268 [2024-11-18 12:09:22.702725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.734970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.735027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.268 [2024-11-18 12:09:22.735044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.173 ms 00:26:25.268 [2024-11-18 12:09:22.735056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.735172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.735189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:25.268 [2024-11-18 12:09:22.735203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:25.268 [2024-11-18 12:09:22.735216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.781285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.781339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.268 [2024-11-18 12:09:22.781358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.990 ms 00:26:25.268 [2024-11-18 12:09:22.781370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.781430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.781445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.268 [2024-11-18 12:09:22.781461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:25.268 [2024-11-18 12:09:22.781480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.782168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.782216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.268 [2024-11-18 12:09:22.782233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:26:25.268 [2024-11-18 12:09:22.782244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.782458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
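[Editor's note: the figures dumped above are internally consistent and can be sanity-checked with plain arithmetic. Nothing below is SPDK code; every constant is copied verbatim from the log records above, and the only computation is the ratio or sum the log itself implies.

# L2P table size: "L2P entries: 20971520" at "L2P address size: 4" bytes.
print(20971520 * 4 / 1024**2)                # -> 80.0, matches "Region l2p ... 80.00 MiB"

# Bands validity from the first device's shutdown dump:
# Band 1 (closed) + Band 2 (open) account for every valid LBA.
print(261120 + 1536)                         # -> 262656, the logged "total valid LBAs"

# WAF from the same "Dump statistics" step is total writes over user writes.
print(round(263872 / 261888, 4))             # -> 1.0076, the logged WAF

A WAF this close to 1.0 indicates the dirty-shutdown workload incurred almost no write amplification before the device was torn down.]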
00:26:25.268 [2024-11-18 12:09:22.782483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.268 [2024-11-18 12:09:22.782497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:26:25.268 [2024-11-18 12:09:22.782520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.799190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.799238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.268 [2024-11-18 12:09:22.799257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.637 ms 00:26:25.268 [2024-11-18 12:09:22.799269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.814299] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:25.268 [2024-11-18 12:09:22.814344] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:25.268 [2024-11-18 12:09:22.814363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.814375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:25.268 [2024-11-18 12:09:22.814388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.948 ms 00:26:25.268 [2024-11-18 12:09:22.814400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.840443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.840504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:25.268 [2024-11-18 12:09:22.840523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.976 ms 00:26:25.268 [2024-11-18 12:09:22.840535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.853542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.853611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:25.268 [2024-11-18 12:09:22.853628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.928 ms 00:26:25.268 [2024-11-18 12:09:22.853640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.268 [2024-11-18 12:09:22.866226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.268 [2024-11-18 12:09:22.866274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:25.269 [2024-11-18 12:09:22.866291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.528 ms 00:26:25.269 [2024-11-18 12:09:22.866302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.867027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.867069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:25.269 [2024-11-18 12:09:22.867085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:26:25.269 [2024-11-18 12:09:22.867100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.932094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.932162] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:25.269 [2024-11-18 12:09:22.932195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.963 ms 00:26:25.269 [2024-11-18 12:09:22.932208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.943381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:25.269 [2024-11-18 12:09:22.946469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.946521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:25.269 [2024-11-18 12:09:22.946538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.192 ms 00:26:25.269 [2024-11-18 12:09:22.946551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.946677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.946695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:25.269 [2024-11-18 12:09:22.946711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:25.269 [2024-11-18 12:09:22.946729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.947687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.947749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:25.269 [2024-11-18 12:09:22.947768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:26:25.269 [2024-11-18 12:09:22.947780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.947832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.947847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:25.269 [2024-11-18 12:09:22.947862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:25.269 [2024-11-18 12:09:22.947876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.269 [2024-11-18 12:09:22.947928] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:25.269 [2024-11-18 12:09:22.947949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.269 [2024-11-18 12:09:22.947964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:25.269 [2024-11-18 12:09:22.947979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:25.269 [2024-11-18 12:09:22.947992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.530 [2024-11-18 12:09:22.974902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.530 [2024-11-18 12:09:22.974956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:25.530 [2024-11-18 12:09:22.974975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.876 ms 00:26:25.530 [2024-11-18 12:09:22.974995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.530 [2024-11-18 12:09:22.975113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.530 [2024-11-18 12:09:22.975132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:25.530 [2024-11-18 12:09:22.975147] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:25.530 [2024-11-18 12:09:22.975160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.530 [2024-11-18 12:09:22.976523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.624 ms, result 0 00:26:26.477  [2024-11-18T12:09:25.567Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-18T12:09:26.512Z] Copying: 38/1024 [MB] (18 MBps) [2024-11-18T12:09:27.457Z] Copying: 55/1024 [MB] (17 MBps) [2024-11-18T12:09:28.401Z] Copying: 71/1024 [MB] (16 MBps) [2024-11-18T12:09:29.346Z] Copying: 95/1024 [MB] (23 MBps) [2024-11-18T12:09:30.289Z] Copying: 112/1024 [MB] (17 MBps) [2024-11-18T12:09:31.236Z] Copying: 126/1024 [MB] (13 MBps) [2024-11-18T12:09:32.183Z] Copying: 140/1024 [MB] (13 MBps) [2024-11-18T12:09:33.610Z] Copying: 158/1024 [MB] (17 MBps) [2024-11-18T12:09:34.241Z] Copying: 179/1024 [MB] (21 MBps) [2024-11-18T12:09:35.182Z] Copying: 190/1024 [MB] (11 MBps) [2024-11-18T12:09:36.568Z] Copying: 209/1024 [MB] (18 MBps) [2024-11-18T12:09:37.512Z] Copying: 228/1024 [MB] (18 MBps) [2024-11-18T12:09:38.455Z] Copying: 246/1024 [MB] (18 MBps) [2024-11-18T12:09:39.395Z] Copying: 265/1024 [MB] (18 MBps) [2024-11-18T12:09:40.336Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-18T12:09:41.276Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-18T12:09:42.214Z] Copying: 298/1024 [MB] (11 MBps) [2024-11-18T12:09:43.159Z] Copying: 321/1024 [MB] (23 MBps) [2024-11-18T12:09:44.547Z] Copying: 337/1024 [MB] (15 MBps) [2024-11-18T12:09:45.492Z] Copying: 352/1024 [MB] (15 MBps) [2024-11-18T12:09:46.436Z] Copying: 364/1024 [MB] (12 MBps) [2024-11-18T12:09:47.378Z] Copying: 382/1024 [MB] (17 MBps) [2024-11-18T12:09:48.319Z] Copying: 405/1024 [MB] (22 MBps) [2024-11-18T12:09:49.264Z] Copying: 418/1024 [MB] (13 MBps) [2024-11-18T12:09:50.207Z] Copying: 435/1024 [MB] (16 MBps) [2024-11-18T12:09:51.594Z] Copying: 447/1024 [MB] (11 MBps) [2024-11-18T12:09:52.166Z] Copying: 460/1024 [MB] (13 MBps) [2024-11-18T12:09:53.552Z] Copying: 476/1024 [MB] (16 MBps) [2024-11-18T12:09:54.493Z] Copying: 488/1024 [MB] (11 MBps) [2024-11-18T12:09:55.438Z] Copying: 501/1024 [MB] (12 MBps) [2024-11-18T12:09:56.380Z] Copying: 518/1024 [MB] (17 MBps) [2024-11-18T12:09:57.325Z] Copying: 541/1024 [MB] (22 MBps) [2024-11-18T12:09:58.268Z] Copying: 552/1024 [MB] (11 MBps) [2024-11-18T12:09:59.214Z] Copying: 563/1024 [MB] (11 MBps) [2024-11-18T12:10:00.599Z] Copying: 584/1024 [MB] (20 MBps) [2024-11-18T12:10:01.174Z] Copying: 601/1024 [MB] (16 MBps) [2024-11-18T12:10:02.562Z] Copying: 612/1024 [MB] (10 MBps) [2024-11-18T12:10:03.508Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-18T12:10:04.452Z] Copying: 634/1024 [MB] (11 MBps) [2024-11-18T12:10:05.401Z] Copying: 644/1024 [MB] (10 MBps) [2024-11-18T12:10:06.347Z] Copying: 654/1024 [MB] (10 MBps) [2024-11-18T12:10:07.294Z] Copying: 666/1024 [MB] (11 MBps) [2024-11-18T12:10:08.246Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-18T12:10:09.237Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-18T12:10:10.183Z] Copying: 698/1024 [MB] (11 MBps) [2024-11-18T12:10:11.573Z] Copying: 709/1024 [MB] (10 MBps) [2024-11-18T12:10:12.518Z] Copying: 719/1024 [MB] (10 MBps) [2024-11-18T12:10:13.464Z] Copying: 730/1024 [MB] (10 MBps) [2024-11-18T12:10:14.409Z] Copying: 742/1024 [MB] (11 MBps) [2024-11-18T12:10:15.354Z] Copying: 752/1024 [MB] (10 MBps) [2024-11-18T12:10:16.299Z] Copying: 763/1024 [MB] (10 MBps) [2024-11-18T12:10:17.247Z] Copying: 777/1024 [MB] (14 MBps) 
[2024-11-18T12:10:18.193Z] Copying: 788/1024 [MB] (10 MBps) [2024-11-18T12:10:19.577Z] Copying: 800/1024 [MB] (12 MBps) [2024-11-18T12:10:20.515Z] Copying: 810/1024 [MB] (10 MBps) [2024-11-18T12:10:21.460Z] Copying: 823/1024 [MB] (12 MBps) [2024-11-18T12:10:22.406Z] Copying: 839/1024 [MB] (16 MBps) [2024-11-18T12:10:23.349Z] Copying: 852/1024 [MB] (13 MBps) [2024-11-18T12:10:24.293Z] Copying: 863/1024 [MB] (10 MBps) [2024-11-18T12:10:25.237Z] Copying: 876/1024 [MB] (13 MBps) [2024-11-18T12:10:26.181Z] Copying: 892/1024 [MB] (16 MBps) [2024-11-18T12:10:27.570Z] Copying: 910/1024 [MB] (18 MBps) [2024-11-18T12:10:28.515Z] Copying: 922/1024 [MB] (11 MBps) [2024-11-18T12:10:29.458Z] Copying: 943/1024 [MB] (21 MBps) [2024-11-18T12:10:30.404Z] Copying: 964/1024 [MB] (21 MBps) [2024-11-18T12:10:31.349Z] Copying: 986/1024 [MB] (21 MBps) [2024-11-18T12:10:31.920Z] Copying: 1008/1024 [MB] (21 MBps) [2024-11-18T12:10:32.182Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 12:10:31.929800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.929890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.481 [2024-11-18 12:10:31.929914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.481 [2024-11-18 12:10:31.929927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.929961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.481 [2024-11-18 12:10:31.933243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.933297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.481 [2024-11-18 12:10:31.933325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:27:34.481 [2024-11-18 12:10:31.933360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.933702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.933735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.481 [2024-11-18 12:10:31.933751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:27:34.481 [2024-11-18 12:10:31.933764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.937594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.937655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.481 [2024-11-18 12:10:31.937672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:27:34.481 [2024-11-18 12:10:31.937685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.944392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.944445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.481 [2024-11-18 12:10:31.944463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.661 ms 00:27:34.481 [2024-11-18 12:10:31.944476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.972549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.972612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:27:34.481 [2024-11-18 12:10:31.972631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.939 ms 00:27:34.481 [2024-11-18 12:10:31.972643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.989349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.989402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:34.481 [2024-11-18 12:10:31.989422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.637 ms 00:27:34.481 [2024-11-18 12:10:31.989435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:31.994659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:31.994722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:34.481 [2024-11-18 12:10:31.994738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.149 ms 00:27:34.481 [2024-11-18 12:10:31.994750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:32.021646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:32.021698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:34.481 [2024-11-18 12:10:32.021716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.870 ms 00:27:34.481 [2024-11-18 12:10:32.021727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:32.048517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:32.048601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:34.481 [2024-11-18 12:10:32.048621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.660 ms 00:27:34.481 [2024-11-18 12:10:32.048632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:32.074963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:32.075015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.481 [2024-11-18 12:10:32.075033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.207 ms 00:27:34.481 [2024-11-18 12:10:32.075044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:32.102217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.481 [2024-11-18 12:10:32.102281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.481 [2024-11-18 12:10:32.102301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.051 ms 00:27:34.481 [2024-11-18 12:10:32.102313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.481 [2024-11-18 12:10:32.102377] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.481 [2024-11-18 12:10:32.102401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:34.481 [2024-11-18 12:10:32.102425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:34.481 [2024-11-18 12:10:32.102439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 
12:10:32.102453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.481 [2024-11-18 12:10:32.102627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:27:34.482 [2024-11-18 12:10:32.102818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.102993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:34.482 [2024-11-18 12:10:32.103833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:34.482 [2024-11-18 12:10:32.103855] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0494e82a-5272-4595-8f59-cd2d8a7eb200 00:27:34.482 [2024-11-18 12:10:32.103868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 262656 00:27:34.482 [2024-11-18 12:10:32.103879] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:34.482 [2024-11-18 12:10:32.103891] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:34.483 [2024-11-18 12:10:32.103903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:34.483 [2024-11-18 12:10:32.103914] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.483 [2024-11-18 12:10:32.103927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:34.483 [2024-11-18 12:10:32.103949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.483 [2024-11-18 12:10:32.103961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.483 [2024-11-18 12:10:32.103969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.483 [2024-11-18 12:10:32.103978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.483 [2024-11-18 12:10:32.103987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:34.483 [2024-11-18 12:10:32.104001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:27:34.483 [2024-11-18 12:10:32.104012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.483 [2024-11-18 12:10:32.117924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.483 [2024-11-18 12:10:32.117981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:34.483 [2024-11-18 12:10:32.118000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.850 ms 00:27:34.483 [2024-11-18 12:10:32.118011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.483 [2024-11-18 12:10:32.118487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.483 [2024-11-18 12:10:32.118527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:34.483 [2024-11-18 12:10:32.118552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:27:34.483 [2024-11-18 12:10:32.118565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.483 [2024-11-18 12:10:32.155448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.483 [2024-11-18 12:10:32.155521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.483 [2024-11-18 12:10:32.155539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.483 [2024-11-18 12:10:32.155553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.483 [2024-11-18 12:10:32.155652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.483 [2024-11-18 12:10:32.155670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.483 [2024-11-18 12:10:32.155695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.483 [2024-11-18 12:10:32.155709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.483 [2024-11-18 12:10:32.155844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.483 [2024-11-18 12:10:32.155881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.483 [2024-11-18 12:10:32.155899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.483 [2024-11-18 12:10:32.155910] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.483 [2024-11-18 12:10:32.155935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.483 [2024-11-18 12:10:32.155946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.483 [2024-11-18 12:10:32.155957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.483 [2024-11-18 12:10:32.155972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.241184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.241250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.745 [2024-11-18 12:10:32.241264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.241273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.311279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.311343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.745 [2024-11-18 12:10:32.311356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.311372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.311436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.311446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.745 [2024-11-18 12:10:32.311455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.311477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.311552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.311567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.745 [2024-11-18 12:10:32.311620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.311635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.311780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.311794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.745 [2024-11-18 12:10:32.311807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.311820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.311867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.311882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.745 [2024-11-18 12:10:32.311895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.311907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.311969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.311984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.745 [2024-11-18 12:10:32.311999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.312012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.312079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.745 [2024-11-18 12:10:32.312152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.745 [2024-11-18 12:10:32.312167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.745 [2024-11-18 12:10:32.312182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.745 [2024-11-18 12:10:32.312357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.513 ms, result 0 00:27:35.689 00:27:35.689 00:27:35.689 12:10:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:38.237 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:38.237 Process with pid 77590 is not found 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77590 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 77590 ']' 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 77590 00:27:38.237 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77590) - No such process 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 77590 is not found' 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:38.237 Remove shared memory files 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:38.237 00:27:38.237 real 4m18.467s 00:27:38.237 user 4m51.681s 00:27:38.237 sys 0m29.016s 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:38.237 12:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:38.237 ************************************ 00:27:38.237 END TEST 
ftl_dirty_shutdown 00:27:38.237 ************************************ 00:27:38.237 12:10:35 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:38.237 12:10:35 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:38.237 12:10:35 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:38.237 12:10:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:38.237 ************************************ 00:27:38.237 START TEST ftl_upgrade_shutdown 00:27:38.237 ************************************ 00:27:38.237 12:10:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:38.499 * Looking for test storage... 00:27:38.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.500 --rc genhtml_branch_coverage=1 00:27:38.500 --rc genhtml_function_coverage=1 00:27:38.500 --rc genhtml_legend=1 00:27:38.500 --rc geninfo_all_blocks=1 00:27:38.500 --rc geninfo_unexecuted_blocks=1 00:27:38.500 00:27:38.500 ' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.500 --rc genhtml_branch_coverage=1 00:27:38.500 --rc genhtml_function_coverage=1 00:27:38.500 --rc genhtml_legend=1 00:27:38.500 --rc geninfo_all_blocks=1 00:27:38.500 --rc geninfo_unexecuted_blocks=1 00:27:38.500 00:27:38.500 ' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.500 --rc genhtml_branch_coverage=1 00:27:38.500 --rc genhtml_function_coverage=1 00:27:38.500 --rc genhtml_legend=1 00:27:38.500 --rc geninfo_all_blocks=1 00:27:38.500 --rc geninfo_unexecuted_blocks=1 00:27:38.500 00:27:38.500 ' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.500 --rc genhtml_branch_coverage=1 00:27:38.500 --rc genhtml_function_coverage=1 00:27:38.500 --rc genhtml_legend=1 00:27:38.500 --rc geninfo_all_blocks=1 00:27:38.500 --rc geninfo_unexecuted_blocks=1 00:27:38.500 00:27:38.500 ' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:38.500 12:10:36 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80367 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:38.500 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80367 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80367 ']' 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:38.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:38.501 12:10:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:38.501 [2024-11-18 12:10:36.191147] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
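tcp_target_setup has now launched spdk_tgt pinned to core 0 ('--cpumask=[0]') and waitforlisten polls pid 80367 until the target answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming rpc_get_methods as the probe RPC and a 0.5 s retry interval (the real helper in test/common/autotest_common.sh is more elaborate):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do    # max_retries=100, as traced above
            kill -0 "$pid" || return 1     # give up if the target died during startup
            # Ready once a trivial RPC round-trips on the UNIX domain socket
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5                      # assumed interval
        done
        return 1
    }

In the trace above it is invoked as waitforlisten 80367 immediately after spdk_tgt goes into the background.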
00:27:38.501 [2024-11-18 12:10:36.191305] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80367 ] 00:27:38.761 [2024-11-18 12:10:36.353565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.022 [2024-11-18 12:10:36.477172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:39.593 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:39.853 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:39.854 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:40.116 { 00:27:40.116 "name": "basen1", 00:27:40.116 "aliases": [ 00:27:40.116 "4ad86d49-a6b9-4cff-94b5-54d73e128be5" 00:27:40.116 ], 00:27:40.116 "product_name": "NVMe disk", 00:27:40.116 "block_size": 4096, 00:27:40.116 "num_blocks": 1310720, 00:27:40.116 "uuid": "4ad86d49-a6b9-4cff-94b5-54d73e128be5", 00:27:40.116 "numa_id": -1, 00:27:40.116 "assigned_rate_limits": { 00:27:40.116 "rw_ios_per_sec": 0, 00:27:40.116 "rw_mbytes_per_sec": 0, 00:27:40.116 "r_mbytes_per_sec": 0, 00:27:40.116 "w_mbytes_per_sec": 0 00:27:40.116 }, 00:27:40.116 "claimed": true, 00:27:40.116 "claim_type": "read_many_write_one", 00:27:40.116 "zoned": false, 00:27:40.116 "supported_io_types": { 00:27:40.116 "read": true, 00:27:40.116 "write": true, 00:27:40.116 "unmap": true, 00:27:40.116 "flush": true, 00:27:40.116 "reset": true, 00:27:40.116 "nvme_admin": true, 00:27:40.116 "nvme_io": true, 00:27:40.116 "nvme_io_md": false, 00:27:40.116 "write_zeroes": true, 00:27:40.116 "zcopy": false, 00:27:40.116 "get_zone_info": false, 00:27:40.116 "zone_management": false, 00:27:40.116 "zone_append": false, 00:27:40.116 "compare": true, 00:27:40.116 "compare_and_write": false, 00:27:40.116 "abort": true, 00:27:40.116 "seek_hole": false, 00:27:40.116 "seek_data": false, 00:27:40.116 "copy": true, 00:27:40.116 "nvme_iov_md": false 00:27:40.116 }, 00:27:40.116 "driver_specific": { 00:27:40.116 "nvme": [ 00:27:40.116 { 00:27:40.116 "pci_address": "0000:00:11.0", 00:27:40.116 "trid": { 00:27:40.116 "trtype": "PCIe", 00:27:40.116 "traddr": "0000:00:11.0" 00:27:40.116 }, 00:27:40.116 "ctrlr_data": { 00:27:40.116 "cntlid": 0, 00:27:40.116 "vendor_id": "0x1b36", 00:27:40.116 "model_number": "QEMU NVMe Ctrl", 00:27:40.116 "serial_number": "12341", 00:27:40.116 "firmware_revision": "8.0.0", 00:27:40.116 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:40.116 "oacs": { 00:27:40.116 "security": 0, 00:27:40.116 "format": 1, 00:27:40.116 "firmware": 0, 00:27:40.116 "ns_manage": 1 00:27:40.116 }, 00:27:40.116 "multi_ctrlr": false, 00:27:40.116 "ana_reporting": false 00:27:40.116 }, 00:27:40.116 "vs": { 00:27:40.116 "nvme_version": "1.4" 00:27:40.116 }, 00:27:40.116 "ns_data": { 00:27:40.116 "id": 1, 00:27:40.116 "can_share": false 00:27:40.116 } 00:27:40.116 } 00:27:40.116 ], 00:27:40.116 "mp_policy": "active_passive" 00:27:40.116 } 00:27:40.116 } 00:27:40.116 ]' 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
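The bs/nb pair extracted above is where the 5120 comes from: get_bdev_size multiplies block_size by num_blocks and converts to MiB, i.e. 1310720 blocks x 4096 bytes = 5368709120 bytes = 5120 MiB, so the [[ 20480 -le 5120 ]] guard that follows evaluates false (the 20480 MiB FTL_BASE_SIZE exceeds the raw 5 GiB namespace). A self-contained sketch of the same derivation (the rpc.py call and jq filters are the ones traced above; the function name is illustrative):

    get_bdev_size_mib() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for basen1
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 for basen1
        echo $((bs * nb / 1024 / 1024))               # 4096 * 1310720 / 2^20 = 5120
    }

00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- 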
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:40.116 12:10:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:40.378 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5be85932-ebc9-45a3-975b-297f666c3472 00:27:40.378 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:40.378 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5be85932-ebc9-45a3-975b-297f666c3472 00:27:40.640 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:40.901 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=bc730cdf-58e9-479c-a392-d5fc697e814d 00:27:40.901 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u bc730cdf-58e9-479c-a392-d5fc697e814d 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=b08ee181-f167-44cd-ada7-971f152ab23f 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z b08ee181-f167-44cd-ada7-971f152ab23f ]] 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 b08ee181-f167-44cd-ada7-971f152ab23f 5120 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=b08ee181-f167-44cd-ada7-971f152ab23f 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size b08ee181-f167-44cd-ada7-971f152ab23f 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=b08ee181-f167-44cd-ada7-971f152ab23f 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:41.160 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b08ee181-f167-44cd-ada7-971f152ab23f 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:41.419 { 00:27:41.419 "name": "b08ee181-f167-44cd-ada7-971f152ab23f", 00:27:41.419 "aliases": [ 00:27:41.419 "lvs/basen1p0" 00:27:41.419 ], 00:27:41.419 "product_name": "Logical Volume", 00:27:41.419 "block_size": 4096, 00:27:41.419 "num_blocks": 5242880, 00:27:41.419 "uuid": "b08ee181-f167-44cd-ada7-971f152ab23f", 00:27:41.419 "assigned_rate_limits": { 00:27:41.419 "rw_ios_per_sec": 0, 00:27:41.419 "rw_mbytes_per_sec": 0, 00:27:41.419 "r_mbytes_per_sec": 0, 00:27:41.419 "w_mbytes_per_sec": 0 00:27:41.419 }, 00:27:41.419 "claimed": false, 00:27:41.419 "zoned": false, 00:27:41.419 "supported_io_types": { 00:27:41.419 "read": true, 00:27:41.419 "write": true, 00:27:41.419 "unmap": true, 00:27:41.419 "flush": false, 00:27:41.419 "reset": true, 00:27:41.419 "nvme_admin": false, 00:27:41.419 "nvme_io": false, 00:27:41.419 "nvme_io_md": false, 00:27:41.419 "write_zeroes": 
true, 00:27:41.419 "zcopy": false, 00:27:41.419 "get_zone_info": false, 00:27:41.419 "zone_management": false, 00:27:41.419 "zone_append": false, 00:27:41.419 "compare": false, 00:27:41.419 "compare_and_write": false, 00:27:41.419 "abort": false, 00:27:41.419 "seek_hole": true, 00:27:41.419 "seek_data": true, 00:27:41.419 "copy": false, 00:27:41.419 "nvme_iov_md": false 00:27:41.419 }, 00:27:41.419 "driver_specific": { 00:27:41.419 "lvol": { 00:27:41.419 "lvol_store_uuid": "bc730cdf-58e9-479c-a392-d5fc697e814d", 00:27:41.419 "base_bdev": "basen1", 00:27:41.419 "thin_provision": true, 00:27:41.419 "num_allocated_clusters": 0, 00:27:41.419 "snapshot": false, 00:27:41.419 "clone": false, 00:27:41.419 "esnap_clone": false 00:27:41.419 } 00:27:41.419 } 00:27:41.419 } 00:27:41.419 ]' 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:41.419 12:10:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:41.680 12:10:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:41.680 12:10:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:41.680 12:10:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:41.941 12:10:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:41.941 12:10:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:41.941 12:10:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b08ee181-f167-44cd-ada7-971f152ab23f -c cachen1p0 --l2p_dram_limit 2 00:27:41.941 [2024-11-18 12:10:39.626981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.941 [2024-11-18 12:10:39.627050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:41.941 [2024-11-18 12:10:39.627069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:41.941 [2024-11-18 12:10:39.627079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.941 [2024-11-18 12:10:39.627150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.941 [2024-11-18 12:10:39.627160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:41.941 [2024-11-18 12:10:39.627172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:41.941 [2024-11-18 12:10:39.627180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.941 [2024-11-18 12:10:39.627204] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:41.941 [2024-11-18 
12:10:39.629995] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:41.941 [2024-11-18 12:10:39.630055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.941 [2024-11-18 12:10:39.630064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:41.941 [2024-11-18 12:10:39.630077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.852 ms 00:27:41.941 [2024-11-18 12:10:39.630086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.941 [2024-11-18 12:10:39.630185] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID e7300cbd-bb91-4fde-a772-b3a343c981cd 00:27:41.941 [2024-11-18 12:10:39.632057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.941 [2024-11-18 12:10:39.632110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:41.941 [2024-11-18 12:10:39.632122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:41.941 [2024-11-18 12:10:39.632132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.204 [2024-11-18 12:10:39.641434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.204 [2024-11-18 12:10:39.641489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.204 [2024-11-18 12:10:39.641500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.225 ms 00:27:42.204 [2024-11-18 12:10:39.641511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.204 [2024-11-18 12:10:39.641561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.204 [2024-11-18 12:10:39.641572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.204 [2024-11-18 12:10:39.641600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:42.204 [2024-11-18 12:10:39.641615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.204 [2024-11-18 12:10:39.641672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.204 [2024-11-18 12:10:39.641687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:42.204 [2024-11-18 12:10:39.641695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:42.204 [2024-11-18 12:10:39.641711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.204 [2024-11-18 12:10:39.641735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:42.204 [2024-11-18 12:10:39.646198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.204 [2024-11-18 12:10:39.646244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.204 [2024-11-18 12:10:39.646259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.467 ms 00:27:42.204 [2024-11-18 12:10:39.646268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.204 [2024-11-18 12:10:39.646302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.204 [2024-11-18 12:10:39.646311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:42.204 [2024-11-18 12:10:39.646322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:42.204 [2024-11-18 12:10:39.646330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:42.205 [2024-11-18 12:10:39.646379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:42.205 [2024-11-18 12:10:39.646526] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:42.205 [2024-11-18 12:10:39.646543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:42.205 [2024-11-18 12:10:39.646554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:42.205 [2024-11-18 12:10:39.646567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:42.205 [2024-11-18 12:10:39.646576] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:42.205 [2024-11-18 12:10:39.646616] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:42.205 [2024-11-18 12:10:39.646625] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:42.205 [2024-11-18 12:10:39.646637] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:42.205 [2024-11-18 12:10:39.646645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:42.205 [2024-11-18 12:10:39.646656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.205 [2024-11-18 12:10:39.646664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:42.205 [2024-11-18 12:10:39.646675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:27:42.205 [2024-11-18 12:10:39.646683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.205 [2024-11-18 12:10:39.646770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.205 [2024-11-18 12:10:39.646778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:42.205 [2024-11-18 12:10:39.646789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:42.205 [2024-11-18 12:10:39.646804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.205 [2024-11-18 12:10:39.646907] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:42.205 [2024-11-18 12:10:39.646926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:42.205 [2024-11-18 12:10:39.646937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.205 [2024-11-18 12:10:39.646946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.646956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:42.205 [2024-11-18 12:10:39.646964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.646973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:42.205 [2024-11-18 12:10:39.646980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:42.205 [2024-11-18 12:10:39.646989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:42.205 [2024-11-18 12:10:39.646997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:42.205 [2024-11-18 12:10:39.647012] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:42.205 [2024-11-18 12:10:39.647022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:42.205 [2024-11-18 12:10:39.647039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:42.205 [2024-11-18 12:10:39.647046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:42.205 [2024-11-18 12:10:39.647064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:42.205 [2024-11-18 12:10:39.647073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:42.205 [2024-11-18 12:10:39.647090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:42.205 [2024-11-18 12:10:39.647098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:42.205 [2024-11-18 12:10:39.647114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:42.205 [2024-11-18 12:10:39.647122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:42.205 [2024-11-18 12:10:39.647138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:42.205 [2024-11-18 12:10:39.647145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:42.205 [2024-11-18 12:10:39.647161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:42.205 [2024-11-18 12:10:39.647169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:42.205 [2024-11-18 12:10:39.647186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:42.205 [2024-11-18 12:10:39.647193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:42.205 [2024-11-18 12:10:39.647208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:42.205 [2024-11-18 12:10:39.647231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:42.205 [2024-11-18 12:10:39.647252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:42.205 [2024-11-18 12:10:39.647260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647266] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:42.205 [2024-11-18 12:10:39.647280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:42.205 [2024-11-18 12:10:39.647288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.205 [2024-11-18 12:10:39.647305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:42.205 [2024-11-18 12:10:39.647317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:42.205 [2024-11-18 12:10:39.647324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:42.205 [2024-11-18 12:10:39.647333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:42.205 [2024-11-18 12:10:39.647340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:42.205 [2024-11-18 12:10:39.647349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:42.205 [2024-11-18 12:10:39.647359] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:42.205 [2024-11-18 12:10:39.647371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:42.205 [2024-11-18 12:10:39.647392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:42.205 [2024-11-18 12:10:39.647417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:42.205 [2024-11-18 12:10:39.647427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:42.205 [2024-11-18 12:10:39.647434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:42.205 [2024-11-18 12:10:39.647444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:42.205 [2024-11-18 12:10:39.647517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:42.205 [2024-11-18 12:10:39.647529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.205 [2024-11-18 12:10:39.647547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:42.205 [2024-11-18 12:10:39.647554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:42.205 [2024-11-18 12:10:39.647563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:42.205 [2024-11-18 12:10:39.647570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.205 [2024-11-18 12:10:39.647602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:42.205 [2024-11-18 12:10:39.647612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.734 ms 00:27:42.205 [2024-11-18 12:10:39.647622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.206 [2024-11-18 12:10:39.647671] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
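(For orientation: the FTL startup being traced here runs on a device stack assembled earlier in this log. A minimal sketch of that assembly, using the exact rpc.py calls from the trace with paths abbreviated; the shell variables are assumptions, the flags and sizes are from this run. Note how get_bdev_size derives the 20480 MiB base size above as num_blocks × block_size = 5242880 × 4096 bytes / 1 MiB.)

  # Clear any stale lvstores, then build the 20 GiB thin-provisioned base lvol on basen1.
  for lvs in $(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  done
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs)
  base=$(scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u "$lvs")
  # NV cache: a 5 GiB split of the second NVMe controller (0000:00:10.0).
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1        # yields cachen1p0
  # Bind base + cache into the FTL bdev whose startup is traced here.
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2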
00:27:42.206 [2024-11-18 12:10:39.647687] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:46.495 [2024-11-18 12:10:43.788556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.495 [2024-11-18 12:10:43.788670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:46.495 [2024-11-18 12:10:43.788689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4140.867 ms 00:27:46.495 [2024-11-18 12:10:43.788701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.495 [2024-11-18 12:10:43.820318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.495 [2024-11-18 12:10:43.820379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.495 [2024-11-18 12:10:43.820394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.368 ms 00:27:46.495 [2024-11-18 12:10:43.820404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.495 [2024-11-18 12:10:43.820510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.495 [2024-11-18 12:10:43.820524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:46.495 [2024-11-18 12:10:43.820533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:46.495 [2024-11-18 12:10:43.820550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.495 [2024-11-18 12:10:43.855829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.495 [2024-11-18 12:10:43.855879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.495 [2024-11-18 12:10:43.855891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.225 ms 00:27:46.495 [2024-11-18 12:10:43.855901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.495 [2024-11-18 12:10:43.855936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.495 [2024-11-18 12:10:43.855952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.495 [2024-11-18 12:10:43.855961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:46.495 [2024-11-18 12:10:43.855972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.495 [2024-11-18 12:10:43.856542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.495 [2024-11-18 12:10:43.856603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.495 [2024-11-18 12:10:43.856615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:27:46.496 [2024-11-18 12:10:43.856625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.856680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.856692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.496 [2024-11-18 12:10:43.856704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:46.496 [2024-11-18 12:10:43.856717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.873738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.873784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.496 [2024-11-18 12:10:43.873795] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.001 ms 00:27:46.496 [2024-11-18 12:10:43.873805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.886873] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:46.496 [2024-11-18 12:10:43.888164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.888205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:46.496 [2024-11-18 12:10:43.888218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.272 ms 00:27:46.496 [2024-11-18 12:10:43.888226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.925079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.925137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:46.496 [2024-11-18 12:10:43.925155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.818 ms 00:27:46.496 [2024-11-18 12:10:43.925165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.925271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.925285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:46.496 [2024-11-18 12:10:43.925300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:46.496 [2024-11-18 12:10:43.925309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.950257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.950305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:46.496 [2024-11-18 12:10:43.950320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.894 ms 00:27:46.496 [2024-11-18 12:10:43.950329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.975127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.975170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:46.496 [2024-11-18 12:10:43.975185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.745 ms 00:27:46.496 [2024-11-18 12:10:43.975192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:43.975854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:43.975881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:46.496 [2024-11-18 12:10:43.975893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:27:46.496 [2024-11-18 12:10:43.975904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.055532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:44.055603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:46.496 [2024-11-18 12:10:44.055625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.582 ms 00:27:46.496 [2024-11-18 12:10:44.055634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.082614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:46.496 [2024-11-18 12:10:44.082660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:46.496 [2024-11-18 12:10:44.082684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.884 ms 00:27:46.496 [2024-11-18 12:10:44.082692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.108057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:44.108115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:46.496 [2024-11-18 12:10:44.108130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.312 ms 00:27:46.496 [2024-11-18 12:10:44.108138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.133726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:44.133774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:46.496 [2024-11-18 12:10:44.133790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.536 ms 00:27:46.496 [2024-11-18 12:10:44.133798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.133851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:44.133862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:46.496 [2024-11-18 12:10:44.133876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:46.496 [2024-11-18 12:10:44.133885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.133978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.496 [2024-11-18 12:10:44.133990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:46.496 [2024-11-18 12:10:44.134004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:46.496 [2024-11-18 12:10:44.134012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.496 [2024-11-18 12:10:44.135135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4507.677 ms, result 0 00:27:46.496 { 00:27:46.496 "name": "ftl", 00:27:46.496 "uuid": "e7300cbd-bb91-4fde-a772-b3a343c981cd" 00:27:46.496 } 00:27:46.496 12:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:46.786 [2024-11-18 12:10:44.362321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.786 12:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:47.047 12:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:47.309 [2024-11-18 12:10:44.802798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:47.309 12:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:47.570 [2024-11-18 12:10:45.023928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:47.570 12:10:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:47.832 Fill FTL, iteration 1 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80500 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80500 /var/tmp/spdk.tgt.sock 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80500 ']' 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:47.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:47.832 12:10:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.832 [2024-11-18 12:10:45.490389] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:27:47.832 [2024-11-18 12:10:45.491017] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80500 ] 00:27:48.094 [2024-11-18 12:10:45.655045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.355 [2024-11-18 12:10:45.802144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.927 12:10:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.927 12:10:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:48.927 12:10:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:49.186 ftln1 00:27:49.186 12:10:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:49.186 12:10:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80500 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80500 ']' 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80500 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80500 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:49.445 killing process with pid 80500 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80500' 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80500 00:27:49.445 12:10:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80500 00:27:50.820 12:10:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:50.820 12:10:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:50.820 [2024-11-18 12:10:48.468884] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
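(The fill geometry set up above works out to bs=1048576 × count=1024 = 1073741824 bytes, i.e. each iteration pushes exactly 1 GiB through ftln1 at queue depth 2, with --seek/--skip advancing by count so iteration 2 covers the second gigabyte. The tcp_dd helper being traced condenses to the following sketch; the commands and flags are taken verbatim from the trace, while the variable names and the redirection into ini.json are assumptions:)

  # A second SPDK app on core 1 acts as the NVMe/TCP initiator.
  build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # Attaching the namespace that the main target exported earlier (via nvmf_create_transport,
  # nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener) creates
  # bdev "ftln1" on the initiator side.
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
    -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # Snapshot the bdev subsystem so spdk_dd can rebuild the stack standalone.
  { echo '{"subsystems": ['
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > test/ftl/config/ini.json
  kill "$spdk_ini_pid"
  # Drive the I/O; spdk_dd recreates the bdevs from ini.json and writes over NVMe/TCP.
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0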
00:27:50.820 [2024-11-18 12:10:48.468997] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80542 ] 00:27:51.079 [2024-11-18 12:10:48.622241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.079 [2024-11-18 12:10:48.707151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.462  [2024-11-18T12:10:51.104Z] Copying: 228/1024 [MB] (228 MBps) [2024-11-18T12:10:52.048Z] Copying: 483/1024 [MB] (255 MBps) [2024-11-18T12:10:53.434Z] Copying: 720/1024 [MB] (237 MBps) [2024-11-18T12:10:53.695Z] Copying: 933/1024 [MB] (213 MBps) [2024-11-18T12:10:54.638Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:27:56.937 00:27:56.937 Calculate MD5 checksum, iteration 1 00:27:56.937 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:56.937 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:56.937 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.938 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:56.938 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:56.938 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:56.938 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:56.938 12:10:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.938 [2024-11-18 12:10:54.387320] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:27:56.938 [2024-11-18 12:10:54.387419] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80609 ] 00:27:56.938 [2024-11-18 12:10:54.538650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.938 [2024-11-18 12:10:54.630474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.315  [2024-11-18T12:10:56.589Z] Copying: 769/1024 [MB] (769 MBps) [2024-11-18T12:10:56.849Z] Copying: 1024/1024 [MB] (average 738 MBps) 00:27:59.148 00:27:59.409 12:10:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:59.409 12:10:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.320 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=656ab26b9a139c4c9039ccfc12f4ba2e 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:01.580 Fill FTL, iteration 2 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:01.580 12:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:01.580 [2024-11-18 12:10:59.086935] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
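(At this point iteration 1 is complete: 1 GiB written, read back, and fingerprinted as sums[0]=656ab26b9a139c4c9039ccfc12f4ba2e. Reconstructed from the seek/skip bookkeeping in the trace, the per-iteration loop in upgrade_shutdown.sh amounts to the sketch below, with paths abbreviated; tcp_dd is the helper sketched earlier:)

  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do          # iterations=2, bs=1048576, count=1024, qd=2
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    (( seek += count ))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of=test/ftl/file --bs=$bs --count=$count --qd=$qd --skip=$skip
    (( skip += count ))
    # One checksum per iteration, presumably compared after the FTL is brought back up.
    sums[i]=$(md5sum test/ftl/file | cut -f1 '-d ')
  done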
00:28:01.580 [2024-11-18 12:10:59.087050] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80659 ] 00:28:01.581 [2024-11-18 12:10:59.246220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.841 [2024-11-18 12:10:59.354690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.222  [2024-11-18T12:11:01.863Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-18T12:11:02.806Z] Copying: 425/1024 [MB] (235 MBps) [2024-11-18T12:11:03.751Z] Copying: 652/1024 [MB] (227 MBps) [2024-11-18T12:11:04.694Z] Copying: 884/1024 [MB] (232 MBps) [2024-11-18T12:11:04.955Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:28:07.254 00:28:07.517 Calculate MD5 checksum, iteration 2 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:07.517 12:11:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:07.517 [2024-11-18 12:11:05.025300] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
00:28:07.517 [2024-11-18 12:11:05.025418] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80723 ] 00:28:07.517 [2024-11-18 12:11:05.178370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.779 [2024-11-18 12:11:05.262857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.166  [2024-11-18T12:11:07.440Z] Copying: 669/1024 [MB] (669 MBps) [2024-11-18T12:11:08.382Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:28:10.681 00:28:10.681 12:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:10.681 12:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d3d31b2c3097da7eb8428af350ed0719 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:13.227 [2024-11-18 12:11:10.637220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.227 [2024-11-18 12:11:10.637273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:13.227 [2024-11-18 12:11:10.637286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:13.227 [2024-11-18 12:11:10.637294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.227 [2024-11-18 12:11:10.637312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.227 [2024-11-18 12:11:10.637320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:13.227 [2024-11-18 12:11:10.637329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:13.227 [2024-11-18 12:11:10.637336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.227 [2024-11-18 12:11:10.637352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.227 [2024-11-18 12:11:10.637359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:13.227 [2024-11-18 12:11:10.637366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:13.227 [2024-11-18 12:11:10.637372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.227 [2024-11-18 12:11:10.637423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.199 ms, result 0 00:28:13.227 true 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:13.227 { 00:28:13.227 "name": "ftl", 00:28:13.227 "properties": [ 00:28:13.227 { 00:28:13.227 "name": "superblock_version", 00:28:13.227 "value": 5, 00:28:13.227 "read-only": true 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "name": "base_device", 00:28:13.227 "bands": [ 00:28:13.227 { 00:28:13.227 "id": 0, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 
00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 1, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 2, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 3, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 4, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 5, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 6, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 7, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 8, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 9, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 10, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 11, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 12, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 13, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 14, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 15, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 16, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 17, 00:28:13.227 "state": "FREE", 00:28:13.227 "validity": 0.0 00:28:13.227 } 00:28:13.227 ], 00:28:13.227 "read-only": true 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "name": "cache_device", 00:28:13.227 "type": "bdev", 00:28:13.227 "chunks": [ 00:28:13.227 { 00:28:13.227 "id": 0, 00:28:13.227 "state": "INACTIVE", 00:28:13.227 "utilization": 0.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 1, 00:28:13.227 "state": "CLOSED", 00:28:13.227 "utilization": 1.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 2, 00:28:13.227 "state": "CLOSED", 00:28:13.227 "utilization": 1.0 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 3, 00:28:13.227 "state": "OPEN", 00:28:13.227 "utilization": 0.001953125 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "id": 4, 00:28:13.227 "state": "OPEN", 00:28:13.227 "utilization": 0.0 00:28:13.227 } 00:28:13.227 ], 00:28:13.227 "read-only": true 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "name": "verbose_mode", 00:28:13.227 "value": true, 00:28:13.227 "unit": "", 00:28:13.227 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:13.227 }, 00:28:13.227 { 00:28:13.227 "name": "prep_upgrade_on_shutdown", 00:28:13.227 "value": false, 00:28:13.227 "unit": "", 00:28:13.227 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:13.227 } 00:28:13.227 ] 00:28:13.227 } 00:28:13.227 12:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:13.489 [2024-11-18 12:11:11.053561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:13.489 [2024-11-18 12:11:11.053615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:13.489 [2024-11-18 12:11:11.053628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:13.489 [2024-11-18 12:11:11.053635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.489 [2024-11-18 12:11:11.053654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.489 [2024-11-18 12:11:11.053660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:13.489 [2024-11-18 12:11:11.053667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:13.489 [2024-11-18 12:11:11.053673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.489 [2024-11-18 12:11:11.053688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.489 [2024-11-18 12:11:11.053694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:13.489 [2024-11-18 12:11:11.053701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:13.489 [2024-11-18 12:11:11.053707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.489 [2024-11-18 12:11:11.053755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.187 ms, result 0 00:28:13.489 true 00:28:13.489 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:13.489 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:13.489 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:13.749 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:13.750 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:13.750 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:14.011 [2024-11-18 12:11:11.463881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.011 [2024-11-18 12:11:11.463916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:14.011 [2024-11-18 12:11:11.463926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:14.011 [2024-11-18 12:11:11.463933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.011 [2024-11-18 12:11:11.463950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.011 [2024-11-18 12:11:11.463957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:14.011 [2024-11-18 12:11:11.463964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:14.011 [2024-11-18 12:11:11.463971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.011 [2024-11-18 12:11:11.463985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.011 [2024-11-18 12:11:11.463991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:14.011 [2024-11-18 12:11:11.463998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:14.011 [2024-11-18 12:11:11.464003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:14.011 [2024-11-18 12:11:11.464048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:28:14.011 true 00:28:14.011 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:14.011 { 00:28:14.011 "name": "ftl", 00:28:14.011 "properties": [ 00:28:14.011 { 00:28:14.011 "name": "superblock_version", 00:28:14.011 "value": 5, 00:28:14.011 "read-only": true 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "name": "base_device", 00:28:14.011 "bands": [ 00:28:14.011 { 00:28:14.011 "id": 0, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 1, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 2, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 3, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 4, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 5, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 6, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 7, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 8, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 9, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 10, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 11, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 12, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 13, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 14, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 15, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 16, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 17, 00:28:14.011 "state": "FREE", 00:28:14.011 "validity": 0.0 00:28:14.011 } 00:28:14.011 ], 00:28:14.011 "read-only": true 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "name": "cache_device", 00:28:14.011 "type": "bdev", 00:28:14.011 "chunks": [ 00:28:14.011 { 00:28:14.011 "id": 0, 00:28:14.011 "state": "INACTIVE", 00:28:14.011 "utilization": 0.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 1, 00:28:14.011 "state": "CLOSED", 00:28:14.011 "utilization": 1.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 2, 00:28:14.011 "state": "CLOSED", 00:28:14.011 "utilization": 1.0 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 3, 00:28:14.011 "state": "OPEN", 00:28:14.011 "utilization": 0.001953125 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "id": 4, 00:28:14.011 "state": "OPEN", 00:28:14.011 "utilization": 0.0 00:28:14.011 } 00:28:14.011 ], 00:28:14.011 "read-only": true 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "name": "verbose_mode", 
00:28:14.011 "value": true, 00:28:14.011 "unit": "", 00:28:14.011 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:14.011 }, 00:28:14.011 { 00:28:14.011 "name": "prep_upgrade_on_shutdown", 00:28:14.011 "value": true, 00:28:14.011 "unit": "", 00:28:14.011 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:14.011 } 00:28:14.011 ] 00:28:14.011 } 00:28:14.011 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:14.011 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80367 ]] 00:28:14.011 12:11:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80367 00:28:14.011 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80367 ']' 00:28:14.012 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80367 00:28:14.012 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:14.012 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:14.012 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80367 00:28:14.273 killing process with pid 80367 00:28:14.273 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:14.273 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:14.273 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80367' 00:28:14.273 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80367 00:28:14.273 12:11:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80367 00:28:14.845 [2024-11-18 12:11:12.294070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:14.845 [2024-11-18 12:11:12.306938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.845 [2024-11-18 12:11:12.306975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:14.845 [2024-11-18 12:11:12.306987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:14.845 [2024-11-18 12:11:12.306994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.845 [2024-11-18 12:11:12.307012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:14.845 [2024-11-18 12:11:12.309243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.845 [2024-11-18 12:11:12.309269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:14.845 [2024-11-18 12:11:12.309278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.220 ms 00:28:14.845 [2024-11-18 12:11:12.309286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.872 [2024-11-18 12:11:20.935456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.872 [2024-11-18 12:11:20.935640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:24.872 [2024-11-18 12:11:20.935655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8626.113 ms 00:28:24.872 [2024-11-18 12:11:20.935667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.872 [2024-11-18 12:11:20.937104] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:24.872 [2024-11-18 12:11:20.937125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:24.872 [2024-11-18 12:11:20.937134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.424 ms 00:28:24.872 [2024-11-18 12:11:20.937142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.872 [2024-11-18 12:11:20.938007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.872 [2024-11-18 12:11:20.938027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:24.872 [2024-11-18 12:11:20.938036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:28:24.872 [2024-11-18 12:11:20.938047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.872 [2024-11-18 12:11:20.946569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.872 [2024-11-18 12:11:20.946603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:24.872 [2024-11-18 12:11:20.946612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.492 ms 00:28:24.872 [2024-11-18 12:11:20.946619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.872 [2024-11-18 12:11:20.952946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.872 [2024-11-18 12:11:20.952975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:24.873 [2024-11-18 12:11:20.952984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.301 ms 00:28:24.873 [2024-11-18 12:11:20.952991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.953056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.953065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:24.873 [2024-11-18 12:11:20.953076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:24.873 [2024-11-18 12:11:20.953083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.961014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.961041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:24.873 [2024-11-18 12:11:20.961049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.920 ms 00:28:24.873 [2024-11-18 12:11:20.961055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.969227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.969253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:24.873 [2024-11-18 12:11:20.969261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.143 ms 00:28:24.873 [2024-11-18 12:11:20.969267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.976915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.976939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:24.873 [2024-11-18 12:11:20.976946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.622 ms 00:28:24.873 [2024-11-18 12:11:20.976952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.984357] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.984382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:24.873 [2024-11-18 12:11:20.984390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.355 ms 00:28:24.873 [2024-11-18 12:11:20.984395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.984420] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:24.873 [2024-11-18 12:11:20.984431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.873 [2024-11-18 12:11:20.984439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:24.873 [2024-11-18 12:11:20.984453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:24.873 [2024-11-18 12:11:20.984460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.873 [2024-11-18 12:11:20.984554] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:24.873 [2024-11-18 12:11:20.984560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e7300cbd-bb91-4fde-a772-b3a343c981cd 00:28:24.873 [2024-11-18 12:11:20.984566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:24.873 [2024-11-18 12:11:20.984572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:24.873 [2024-11-18 12:11:20.984577] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:24.873 [2024-11-18 12:11:20.984592] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:24.873 [2024-11-18 12:11:20.984598] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:24.873 [2024-11-18 12:11:20.984610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:24.873 [2024-11-18 12:11:20.984616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:24.873 [2024-11-18 12:11:20.984621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:24.873 [2024-11-18 12:11:20.984627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:24.873 [2024-11-18 12:11:20.984639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.984645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:24.873 [2024-11-18 12:11:20.984652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:28:24.873 [2024-11-18 12:11:20.984657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.994788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.994812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:24.873 [2024-11-18 12:11:20.994820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.118 ms 00:28:24.873 [2024-11-18 12:11:20.994830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:20.995119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.873 [2024-11-18 12:11:20.995127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:24.873 [2024-11-18 12:11:20.995134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:28:24.873 [2024-11-18 12:11:20.995139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.030016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.030044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:24.873 [2024-11-18 12:11:21.030056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.030063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.030090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.030097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:24.873 [2024-11-18 12:11:21.030103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.030110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.030162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.030170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:24.873 [2024-11-18 12:11:21.030177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.030186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.030199] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.030206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:24.873 [2024-11-18 12:11:21.030212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.030219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.093246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.093282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:24.873 [2024-11-18 12:11:21.093298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.093304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.143931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.143970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:24.873 [2024-11-18 12:11:21.143981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.143987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.144068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.144076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:24.873 [2024-11-18 12:11:21.144083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.144089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.144128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.144136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:24.873 [2024-11-18 12:11:21.144144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.144150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.144224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.144232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:24.873 [2024-11-18 12:11:21.144238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.144244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.144270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.144280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:24.873 [2024-11-18 12:11:21.144287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.144294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.873 [2024-11-18 12:11:21.144328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.873 [2024-11-18 12:11:21.144335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:24.873 [2024-11-18 12:11:21.144342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.873 [2024-11-18 12:11:21.144348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.874 
[2024-11-18 12:11:21.144390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:24.874 [2024-11-18 12:11:21.144398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:24.874 [2024-11-18 12:11:21.144404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:24.874 [2024-11-18 12:11:21.144410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.874 [2024-11-18 12:11:21.144520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8837.533 ms, result 0 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80924 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80924 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80924 ']' 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.220 12:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.220 [2024-11-18 12:11:25.536320] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
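The trace above records a clean 'FTL shutdown' (result 0) and then tcp_target_setup relaunching spdk_tgt from the saved tgt.json config. A minimal sketch of that relaunch step, assembled only from the commands visible in this trace; the readiness poll is an assumption standing in for the waitforlisten helper, whose exact mechanics are not shown here:

    # Sketch, not the actual common.sh helper: start the target and block until
    # its RPC socket answers, mirroring the tcp_target_setup/waitforlisten trace.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # assumed poll interval; the real helper also bounds retries
    done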
00:28:28.220 [2024-11-18 12:11:25.536425] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80924 ] 00:28:28.220 [2024-11-18 12:11:25.678859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.220 [2024-11-18 12:11:25.772818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.805 [2024-11-18 12:11:26.400274] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:28.806 [2024-11-18 12:11:26.400333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:29.067 [2024-11-18 12:11:26.549032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.549069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:29.067 [2024-11-18 12:11:26.549081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:29.067 [2024-11-18 12:11:26.549088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.549130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.549138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:29.067 [2024-11-18 12:11:26.549145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:29.067 [2024-11-18 12:11:26.549151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.549169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:29.067 [2024-11-18 12:11:26.549698] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:29.067 [2024-11-18 12:11:26.549718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.549724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:29.067 [2024-11-18 12:11:26.549731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:28:29.067 [2024-11-18 12:11:26.549737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.551030] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:29.067 [2024-11-18 12:11:26.561862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.561892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:29.067 [2024-11-18 12:11:26.561905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.834 ms 00:28:29.067 [2024-11-18 12:11:26.561912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.561959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.561967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:29.067 [2024-11-18 12:11:26.561974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:29.067 [2024-11-18 12:11:26.561980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.568388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 
12:11:26.568412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:29.067 [2024-11-18 12:11:26.568420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.362 ms 00:28:29.067 [2024-11-18 12:11:26.568425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.568471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.568479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:29.067 [2024-11-18 12:11:26.568485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:29.067 [2024-11-18 12:11:26.568491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.568536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.568547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:29.067 [2024-11-18 12:11:26.568555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:29.067 [2024-11-18 12:11:26.568562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.568593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:29.067 [2024-11-18 12:11:26.571664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.571691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:29.067 [2024-11-18 12:11:26.571698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.090 ms 00:28:29.067 [2024-11-18 12:11:26.571704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.571727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.571733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:29.067 [2024-11-18 12:11:26.571739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:29.067 [2024-11-18 12:11:26.571745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.571762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:29.067 [2024-11-18 12:11:26.571781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:29.067 [2024-11-18 12:11:26.571809] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:29.067 [2024-11-18 12:11:26.571821] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:29.067 [2024-11-18 12:11:26.571904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:29.067 [2024-11-18 12:11:26.571914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:29.067 [2024-11-18 12:11:26.571922] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:29.067 [2024-11-18 12:11:26.571929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:29.067 [2024-11-18 12:11:26.571936] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:29.067 [2024-11-18 12:11:26.571945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:29.067 [2024-11-18 12:11:26.571951] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:29.067 [2024-11-18 12:11:26.571957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:29.067 [2024-11-18 12:11:26.571964] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:29.067 [2024-11-18 12:11:26.571970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.571976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:29.067 [2024-11-18 12:11:26.571982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:28:29.067 [2024-11-18 12:11:26.571987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.572052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.067 [2024-11-18 12:11:26.572059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:29.067 [2024-11-18 12:11:26.572066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:29.067 [2024-11-18 12:11:26.572073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.067 [2024-11-18 12:11:26.572150] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:29.067 [2024-11-18 12:11:26.572158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:29.067 [2024-11-18 12:11:26.572164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:29.067 [2024-11-18 12:11:26.572170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.067 [2024-11-18 12:11:26.572176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:29.067 [2024-11-18 12:11:26.572181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:29.067 [2024-11-18 12:11:26.572191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:29.067 [2024-11-18 12:11:26.572197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:29.067 [2024-11-18 12:11:26.572202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:29.067 [2024-11-18 12:11:26.572207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.067 [2024-11-18 12:11:26.572213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:29.067 [2024-11-18 12:11:26.572218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:29.067 [2024-11-18 12:11:26.572223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.067 [2024-11-18 12:11:26.572228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:29.067 [2024-11-18 12:11:26.572233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:29.067 [2024-11-18 12:11:26.572238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.067 [2024-11-18 12:11:26.572243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:29.067 [2024-11-18 12:11:26.572248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:29.067 [2024-11-18 12:11:26.572253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.067 [2024-11-18 12:11:26.572258] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:29.068 [2024-11-18 12:11:26.572263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:29.068 [2024-11-18 12:11:26.572268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:29.068 [2024-11-18 12:11:26.572277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:29.068 [2024-11-18 12:11:26.572282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:29.068 [2024-11-18 12:11:26.572296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:29.068 [2024-11-18 12:11:26.572301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:29.068 [2024-11-18 12:11:26.572310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:29.068 [2024-11-18 12:11:26.572315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:29.068 [2024-11-18 12:11:26.572325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:29.068 [2024-11-18 12:11:26.572329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.068 [2024-11-18 12:11:26.572334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:29.068 [2024-11-18 12:11:26.572340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.068 [2024-11-18 12:11:26.572349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:29.068 [2024-11-18 12:11:26.572359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:29.068 [2024-11-18 12:11:26.572363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.068 [2024-11-18 12:11:26.572369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:29.068 [2024-11-18 12:11:26.572374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:29.068 [2024-11-18 12:11:26.572378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.068 [2024-11-18 12:11:26.572383] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:29.068 [2024-11-18 12:11:26.572389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:29.068 [2024-11-18 12:11:26.572395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:29.068 [2024-11-18 12:11:26.572407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:29.068 [2024-11-18 12:11:26.572412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:29.068 [2024-11-18 12:11:26.572417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:29.068 [2024-11-18 12:11:26.572422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:29.068 [2024-11-18 12:11:26.572427] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:29.068 [2024-11-18 12:11:26.572432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:29.068 [2024-11-18 12:11:26.572439] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:29.068 [2024-11-18 12:11:26.572445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:29.068 [2024-11-18 12:11:26.572457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:29.068 [2024-11-18 12:11:26.572474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:29.068 [2024-11-18 12:11:26.572480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:29.068 [2024-11-18 12:11:26.572492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:29.068 [2024-11-18 12:11:26.572498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:29.068 [2024-11-18 12:11:26.572535] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:29.068 [2024-11-18 12:11:26.572545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:29.068 [2024-11-18 12:11:26.572557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:29.068 [2024-11-18 12:11:26.572563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:29.068 [2024-11-18 12:11:26.572568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:29.068 [2024-11-18 12:11:26.572574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.068 [2024-11-18 12:11:26.572580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:29.068 [2024-11-18 12:11:26.572596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:28:29.068 [2024-11-18 12:11:26.572602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.068 [2024-11-18 12:11:26.572646] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:29.068 [2024-11-18 12:11:26.572654] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:33.273 [2024-11-18 12:11:30.218018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.218083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:33.273 [2024-11-18 12:11:30.218097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3645.356 ms 00:28:33.273 [2024-11-18 12:11:30.218105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.241708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.241748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:33.273 [2024-11-18 12:11:30.241759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.423 ms 00:28:33.273 [2024-11-18 12:11:30.241766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.241842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.241854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:33.273 [2024-11-18 12:11:30.241862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:33.273 [2024-11-18 12:11:30.241868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.268471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.268506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:33.273 [2024-11-18 12:11:30.268515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.556 ms 00:28:33.273 [2024-11-18 12:11:30.268524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.268553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.268560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:33.273 [2024-11-18 12:11:30.268567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:33.273 [2024-11-18 12:11:30.268573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.268990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.269012] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:33.273 [2024-11-18 12:11:30.269020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:28:33.273 [2024-11-18 12:11:30.269027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.269065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.269072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:33.273 [2024-11-18 12:11:30.269079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:33.273 [2024-11-18 12:11:30.269087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.282487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.282515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:33.273 [2024-11-18 12:11:30.282523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.383 ms 00:28:33.273 [2024-11-18 12:11:30.282530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.293304] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:33.273 [2024-11-18 12:11:30.293334] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:33.273 [2024-11-18 12:11:30.293345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.293352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:33.273 [2024-11-18 12:11:30.293359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.719 ms 00:28:33.273 [2024-11-18 12:11:30.293365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.304065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.304092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:33.273 [2024-11-18 12:11:30.304101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.668 ms 00:28:33.273 [2024-11-18 12:11:30.304109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.313160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.313186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:33.273 [2024-11-18 12:11:30.313193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.017 ms 00:28:33.273 [2024-11-18 12:11:30.313200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.322102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.322127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:33.273 [2024-11-18 12:11:30.322136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.872 ms 00:28:33.273 [2024-11-18 12:11:30.322142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.322617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.322634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:33.273 [2024-11-18 
12:11:30.322642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:28:33.273 [2024-11-18 12:11:30.322648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.384695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.384730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:33.273 [2024-11-18 12:11:30.384741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.031 ms 00:28:33.273 [2024-11-18 12:11:30.384749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.392862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:33.273 [2024-11-18 12:11:30.393596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.393620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:33.273 [2024-11-18 12:11:30.393628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.481 ms 00:28:33.273 [2024-11-18 12:11:30.393635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.393696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.393709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:33.273 [2024-11-18 12:11:30.393716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:33.273 [2024-11-18 12:11:30.393722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.393760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.393769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:33.273 [2024-11-18 12:11:30.393776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:33.273 [2024-11-18 12:11:30.393783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.393801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.393809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:33.273 [2024-11-18 12:11:30.393818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:33.273 [2024-11-18 12:11:30.393825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.273 [2024-11-18 12:11:30.393854] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:33.273 [2024-11-18 12:11:30.393862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.273 [2024-11-18 12:11:30.393868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:33.273 [2024-11-18 12:11:30.393875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:33.274 [2024-11-18 12:11:30.393881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.274 [2024-11-18 12:11:30.411629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.274 [2024-11-18 12:11:30.411660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:33.274 [2024-11-18 12:11:30.411669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.733 ms 00:28:33.274 [2024-11-18 12:11:30.411675] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.274 [2024-11-18 12:11:30.411734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.274 [2024-11-18 12:11:30.411743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:33.274 [2024-11-18 12:11:30.411750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:33.274 [2024-11-18 12:11:30.411756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.274 [2024-11-18 12:11:30.412719] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3863.274 ms, result 0 00:28:33.274 [2024-11-18 12:11:30.427930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.274 [2024-11-18 12:11:30.443924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:33.274 [2024-11-18 12:11:30.452072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:33.274 [2024-11-18 12:11:30.680061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.274 [2024-11-18 12:11:30.680094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:33.274 [2024-11-18 12:11:30.680105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:33.274 [2024-11-18 12:11:30.680115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.274 [2024-11-18 12:11:30.680132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.274 [2024-11-18 12:11:30.680139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:33.274 [2024-11-18 12:11:30.680146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:33.274 [2024-11-18 12:11:30.680152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.274 [2024-11-18 12:11:30.680167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.274 [2024-11-18 12:11:30.680174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:33.274 [2024-11-18 12:11:30.680180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:33.274 [2024-11-18 12:11:30.680186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.274 [2024-11-18 12:11:30.680233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.164 ms, result 0 00:28:33.274 true 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:33.274 { 00:28:33.274 "name": "ftl", 00:28:33.274 "properties": [ 00:28:33.274 { 00:28:33.274 "name": "superblock_version", 00:28:33.274 "value": 5, 00:28:33.274 "read-only": true 00:28:33.274 }, 
00:28:33.274 { 00:28:33.274 "name": "base_device", 00:28:33.274 "bands": [ 00:28:33.274 { 00:28:33.274 "id": 0, 00:28:33.274 "state": "CLOSED", 00:28:33.274 "validity": 1.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 1, 00:28:33.274 "state": "CLOSED", 00:28:33.274 "validity": 1.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 2, 00:28:33.274 "state": "CLOSED", 00:28:33.274 "validity": 0.007843137254901933 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 3, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 4, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 5, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 6, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 7, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 8, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 9, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 10, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 11, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 12, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 13, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 14, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 15, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 16, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 17, 00:28:33.274 "state": "FREE", 00:28:33.274 "validity": 0.0 00:28:33.274 } 00:28:33.274 ], 00:28:33.274 "read-only": true 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "name": "cache_device", 00:28:33.274 "type": "bdev", 00:28:33.274 "chunks": [ 00:28:33.274 { 00:28:33.274 "id": 0, 00:28:33.274 "state": "INACTIVE", 00:28:33.274 "utilization": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 1, 00:28:33.274 "state": "OPEN", 00:28:33.274 "utilization": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 2, 00:28:33.274 "state": "OPEN", 00:28:33.274 "utilization": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 3, 00:28:33.274 "state": "FREE", 00:28:33.274 "utilization": 0.0 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "id": 4, 00:28:33.274 "state": "FREE", 00:28:33.274 "utilization": 0.0 00:28:33.274 } 00:28:33.274 ], 00:28:33.274 "read-only": true 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "name": "verbose_mode", 00:28:33.274 "value": true, 00:28:33.274 "unit": "", 00:28:33.274 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:33.274 }, 00:28:33.274 { 00:28:33.274 "name": "prep_upgrade_on_shutdown", 00:28:33.274 "value": false, 00:28:33.274 "unit": "", 00:28:33.274 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:33.274 } 00:28:33.274 ] 00:28:33.274 } 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:33.274 12:11:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:33.274 12:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:33.536 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:33.536 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:33.536 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:33.536 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:33.536 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:33.797 Validate MD5 checksum, iteration 1 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:33.797 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:33.798 12:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:33.798 [2024-11-18 12:11:31.418857] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
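For readability, the two jq checks traced above (upgrade_shutdown.sh@82 and @89) reduce the bdev_ftl_get_properties dump to single counts; after the clean restart both come back 0, which is what lets the script proceed to checksum validation. The same filters, condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$("$rpc" bdev_ftl_get_properties -b ftl | jq \
        '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    opened=$("$rpc" bdev_ftl_get_properties -b ftl | jq \
        '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    # Mirrors the @83/@90 guards above: a non-zero count would fail the test.
    [[ $used -eq 0 && $opened -eq 0 ]]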
00:28:33.798 [2024-11-18 12:11:31.418976] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81004 ] 00:28:34.059 [2024-11-18 12:11:31.578945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.059 [2024-11-18 12:11:31.673984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.971  [2024-11-18T12:11:34.238Z] Copying: 515/1024 [MB] (515 MBps) [2024-11-18T12:11:35.174Z] Copying: 1024/1024 [MB] (average 574 MBps) 00:28:37.473 00:28:37.473 12:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:37.473 12:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:39.413 Validate MD5 checksum, iteration 2 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=656ab26b9a139c4c9039ccfc12f4ba2e 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 656ab26b9a139c4c9039ccfc12f4ba2e != \6\5\6\a\b\2\6\b\9\a\1\3\9\c\4\c\9\0\3\9\c\c\f\c\1\2\f\4\b\a\2\e ]] 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:39.413 12:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:39.672 [2024-11-18 12:11:37.121772] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 
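Both checksum iterations (the first completed above, the second starting below) follow the same pattern: read 1024 MiB from the ftln1 namespace through the NVMe/TCP initiator with spdk_dd, advance --skip by 1024 blocks, and compare the md5 of what came back against the sum recorded before shutdown. A sketch of the test_validate_checksum loop as the trace suggests it, with $testfile standing in for /home/vagrant/spdk_repo/spdk/test/ftl/file and the iterations count plus sums array assumed to be set by the caller:

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # tcp_dd (ftl/common.sh) wraps spdk_dd with the initiator config.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            ((skip += 1024))
            sum=$(md5sum "$testfile" | cut -f1 '-d ')
            # Data read back must match the sum recorded before shutdown.
            [[ $sum == "${sums[i]}" ]] || return 1
        done
    }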
00:28:39.673 [2024-11-18 12:11:37.121889] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81065 ] 00:28:39.673 [2024-11-18 12:11:37.277139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.673 [2024-11-18 12:11:37.352928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.587  [2024-11-18T12:11:39.550Z] Copying: 666/1024 [MB] (666 MBps) [2024-11-18T12:11:40.118Z] Copying: 1024/1024 [MB] (average 653 MBps) 00:28:42.417 00:28:42.679 12:11:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:42.679 12:11:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d3d31b2c3097da7eb8428af350ed0719 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d3d31b2c3097da7eb8428af350ed0719 != \d\3\d\3\1\b\2\c\3\0\9\7\d\a\7\e\b\8\4\2\8\a\f\3\5\0\e\d\0\7\1\9 ]] 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80924 ]] 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80924 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81121 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81121 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81121 ']' 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
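The bash job-control notice above ("line 832: 80924 Killed") is the crux of the scenario: tcp_target_shutdown_dirty SIGKILLs the running target so FTL has no chance to write its clean-shutdown state, and a fresh spdk_tgt is then launched from the saved tgt.json and must recover on startup (the recovery trace follows). In outline, using the paths and helper names visible in the trace:

    # Dirty shutdown: no RPC teardown, no clean-state flag persisted.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Relaunch from the config captured before the kill; FTL detects the
    # dirty state and runs band/chunk recovery during "FTL startup".
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock answers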
00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:44.595 12:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.595 [2024-11-18 12:11:42.191604] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:28:44.595 [2024-11-18 12:11:42.191717] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81121 ] 00:28:44.856 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 80924 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:44.856 [2024-11-18 12:11:42.345801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.856 [2024-11-18 12:11:42.436940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.427 [2024-11-18 12:11:43.064926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:45.427 [2024-11-18 12:11:43.064984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:45.690 [2024-11-18 12:11:43.213777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.213809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:45.690 [2024-11-18 12:11:43.213821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:45.690 [2024-11-18 12:11:43.213828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.213872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.213880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:45.690 [2024-11-18 12:11:43.213887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:45.690 [2024-11-18 12:11:43.213893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.213911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:45.690 [2024-11-18 12:11:43.214436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:45.690 [2024-11-18 12:11:43.214449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.214455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:45.690 [2024-11-18 12:11:43.214462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:28:45.690 [2024-11-18 12:11:43.214468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.214725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:45.690 [2024-11-18 12:11:43.228484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.228510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:45.690 [2024-11-18 12:11:43.228521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.760 ms 00:28:45.690 [2024-11-18 12:11:43.228528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.235553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:45.690 [2024-11-18 12:11:43.235578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:45.690 [2024-11-18 12:11:43.235599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:45.690 [2024-11-18 12:11:43.235605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.235856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.235864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:45.690 [2024-11-18 12:11:43.235871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:28:45.690 [2024-11-18 12:11:43.235877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.235918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.235925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:45.690 [2024-11-18 12:11:43.235931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:45.690 [2024-11-18 12:11:43.235937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.690 [2024-11-18 12:11:43.235956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.690 [2024-11-18 12:11:43.235963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:45.690 [2024-11-18 12:11:43.235968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:45.690 [2024-11-18 12:11:43.235974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.691 [2024-11-18 12:11:43.235990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:45.691 [2024-11-18 12:11:43.238449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.691 [2024-11-18 12:11:43.238470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:45.691 [2024-11-18 12:11:43.238477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.463 ms 00:28:45.691 [2024-11-18 12:11:43.238482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.691 [2024-11-18 12:11:43.238505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.691 [2024-11-18 12:11:43.238511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:45.691 [2024-11-18 12:11:43.238517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:45.691 [2024-11-18 12:11:43.238523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.691 [2024-11-18 12:11:43.238540] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:45.691 [2024-11-18 12:11:43.238557] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:45.691 [2024-11-18 12:11:43.238596] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:45.691 [2024-11-18 12:11:43.238611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:45.691 [2024-11-18 12:11:43.238695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:45.691 [2024-11-18 12:11:43.238703] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:45.691 [2024-11-18 12:11:43.238711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:45.691 [2024-11-18 12:11:43.238719] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:45.691 [2024-11-18 12:11:43.238726] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:45.691 [2024-11-18 12:11:43.238732] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:45.691 [2024-11-18 12:11:43.238738] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:45.691 [2024-11-18 12:11:43.238744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:45.691 [2024-11-18 12:11:43.238749] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:45.691 [2024-11-18 12:11:43.238757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.691 [2024-11-18 12:11:43.238762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:45.691 [2024-11-18 12:11:43.238768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:28:45.691 [2024-11-18 12:11:43.238774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.691 [2024-11-18 12:11:43.238839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.691 [2024-11-18 12:11:43.238845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:45.691 [2024-11-18 12:11:43.238850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:45.691 [2024-11-18 12:11:43.238856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.691 [2024-11-18 12:11:43.238932] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:45.691 [2024-11-18 12:11:43.238940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:45.691 [2024-11-18 12:11:43.238948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:45.691 [2024-11-18 12:11:43.238954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.238960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:45.691 [2024-11-18 12:11:43.238966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.238972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:45.691 [2024-11-18 12:11:43.238977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:45.691 [2024-11-18 12:11:43.238982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:45.691 [2024-11-18 12:11:43.238988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.238993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:45.691 [2024-11-18 12:11:43.238998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:45.691 [2024-11-18 12:11:43.239004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:45.691 [2024-11-18 12:11:43.239014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:45.691 [2024-11-18 12:11:43.239018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:45.691 [2024-11-18 12:11:43.239028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:45.691 [2024-11-18 12:11:43.239036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:45.691 [2024-11-18 12:11:43.239047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:45.691 [2024-11-18 12:11:43.239052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:45.691 [2024-11-18 12:11:43.239067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:45.691 [2024-11-18 12:11:43.239072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:45.691 [2024-11-18 12:11:43.239082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:45.691 [2024-11-18 12:11:43.239087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:45.691 [2024-11-18 12:11:43.239097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:45.691 [2024-11-18 12:11:43.239102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:45.691 [2024-11-18 12:11:43.239111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:45.691 [2024-11-18 12:11:43.239117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:45.691 [2024-11-18 12:11:43.239127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:45.691 [2024-11-18 12:11:43.239142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:45.691 [2024-11-18 12:11:43.239157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:45.691 [2024-11-18 12:11:43.239162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239167] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:45.691 [2024-11-18 12:11:43.239174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:45.691 [2024-11-18 12:11:43.239180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:45.691 [2024-11-18 12:11:43.239191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:45.691 [2024-11-18 12:11:43.239196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:45.691 [2024-11-18 12:11:43.239201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:45.691 [2024-11-18 12:11:43.239207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:45.691 [2024-11-18 12:11:43.239213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:45.691 [2024-11-18 12:11:43.239218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:45.691 [2024-11-18 12:11:43.239224] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:45.691 [2024-11-18 12:11:43.239231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:45.691 [2024-11-18 12:11:43.239244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:45.691 [2024-11-18 12:11:43.239260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:45.691 [2024-11-18 12:11:43.239266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:45.691 [2024-11-18 12:11:43.239271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:45.691 [2024-11-18 12:11:43.239276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:45.691 [2024-11-18 12:11:43.239302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:45.692 [2024-11-18 12:11:43.239308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:45.692 [2024-11-18 12:11:43.239313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:45.692 [2024-11-18 12:11:43.239319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.692 [2024-11-18 12:11:43.239328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:45.692 [2024-11-18 12:11:43.239334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:45.692 [2024-11-18 12:11:43.239339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:45.692 [2024-11-18 12:11:43.239345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:45.692 [2024-11-18 12:11:43.239350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.239356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:45.692 [2024-11-18 12:11:43.239362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:28:45.692 [2024-11-18 12:11:43.239367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.261172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.261198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:45.692 [2024-11-18 12:11:43.261207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.767 ms 00:28:45.692 [2024-11-18 12:11:43.261213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.261246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.261252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:45.692 [2024-11-18 12:11:43.261259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:45.692 [2024-11-18 12:11:43.261265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.287769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.287794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:45.692 [2024-11-18 12:11:43.287802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.462 ms 00:28:45.692 [2024-11-18 12:11:43.287808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.287831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.287837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:45.692 [2024-11-18 12:11:43.287844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:45.692 [2024-11-18 12:11:43.287852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.287924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.287932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:45.692 [2024-11-18 12:11:43.287938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:45.692 [2024-11-18 12:11:43.287945] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.287979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.287985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:45.692 [2024-11-18 12:11:43.287992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:45.692 [2024-11-18 12:11:43.287998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.301230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.301252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:45.692 [2024-11-18 12:11:43.301260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.211 ms 00:28:45.692 [2024-11-18 12:11:43.301266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.301350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.301358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:45.692 [2024-11-18 12:11:43.301365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:45.692 [2024-11-18 12:11:43.301371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.331377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.331410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:45.692 [2024-11-18 12:11:43.331421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.991 ms 00:28:45.692 [2024-11-18 12:11:43.331435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.692 [2024-11-18 12:11:43.338777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.692 [2024-11-18 12:11:43.338799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:45.692 [2024-11-18 12:11:43.338809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:28:45.692 [2024-11-18 12:11:43.338815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.954 [2024-11-18 12:11:43.387525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.954 [2024-11-18 12:11:43.387566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:45.954 [2024-11-18 12:11:43.387576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.665 ms 00:28:45.954 [2024-11-18 12:11:43.387591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.954 [2024-11-18 12:11:43.387720] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:45.954 [2024-11-18 12:11:43.387822] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:45.954 [2024-11-18 12:11:43.387922] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:45.954 [2024-11-18 12:11:43.388023] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:45.954 [2024-11-18 12:11:43.388031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.954 [2024-11-18 12:11:43.388039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:45.954 [2024-11-18 
12:11:43.388046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:28:45.954 [2024-11-18 12:11:43.388052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.954 [2024-11-18 12:11:43.388099] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:45.954 [2024-11-18 12:11:43.388108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.954 [2024-11-18 12:11:43.388118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:45.954 [2024-11-18 12:11:43.388125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:45.954 [2024-11-18 12:11:43.388131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.954 [2024-11-18 12:11:43.400660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.954 [2024-11-18 12:11:43.400689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:45.954 [2024-11-18 12:11:43.400698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.512 ms 00:28:45.954 [2024-11-18 12:11:43.400705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.954 [2024-11-18 12:11:43.407262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.954 [2024-11-18 12:11:43.407285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:45.954 [2024-11-18 12:11:43.407293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:45.954 [2024-11-18 12:11:43.407300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.954 [2024-11-18 12:11:43.407366] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:45.954 [2024-11-18 12:11:43.407538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.954 [2024-11-18 12:11:43.407547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:45.954 [2024-11-18 12:11:43.407555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:28:45.954 [2024-11-18 12:11:43.407561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.526 [2024-11-18 12:11:44.154694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.527 [2024-11-18 12:11:44.154730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:46.527 [2024-11-18 12:11:44.154741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 746.519 ms 00:28:46.527 [2024-11-18 12:11:44.154748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.527 [2024-11-18 12:11:44.158126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.527 [2024-11-18 12:11:44.158151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:46.527 [2024-11-18 12:11:44.158158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.175 ms 00:28:46.527 [2024-11-18 12:11:44.158165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.527 [2024-11-18 12:11:44.158556] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:46.527 [2024-11-18 12:11:44.158574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.527 [2024-11-18 12:11:44.158591] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:46.527 [2024-11-18 12:11:44.158599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.384 ms 00:28:46.527 [2024-11-18 12:11:44.158605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.527 [2024-11-18 12:11:44.158631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.527 [2024-11-18 12:11:44.158639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:46.527 [2024-11-18 12:11:44.158645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:46.527 [2024-11-18 12:11:44.158655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.527 [2024-11-18 12:11:44.158681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 751.313 ms, result 0 00:28:46.527 [2024-11-18 12:11:44.158709] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:46.527 [2024-11-18 12:11:44.158868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.527 [2024-11-18 12:11:44.158877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:46.527 [2024-11-18 12:11:44.158883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:28:46.527 [2024-11-18 12:11:44.158889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.974391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.974444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:47.469 [2024-11-18 12:11:44.974460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 814.785 ms 00:28:47.469 [2024-11-18 12:11:44.974468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.978825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.978856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:47.469 [2024-11-18 12:11:44.978866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.381 ms 00:28:47.469 [2024-11-18 12:11:44.978874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.979763] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:47.469 [2024-11-18 12:11:44.979785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.979793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:47.469 [2024-11-18 12:11:44.979802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:28:47.469 [2024-11-18 12:11:44.979810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.979841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.979850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:47.469 [2024-11-18 12:11:44.979857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:47.469 [2024-11-18 12:11:44.979865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 
12:11:44.979900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 821.179 ms, result 0 00:28:47.469 [2024-11-18 12:11:44.979943] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:47.469 [2024-11-18 12:11:44.979953] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:47.469 [2024-11-18 12:11:44.979962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.979971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:47.469 [2024-11-18 12:11:44.979979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1572.608 ms 00:28:47.469 [2024-11-18 12:11:44.979987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.980029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.980042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:47.469 [2024-11-18 12:11:44.980050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:47.469 [2024-11-18 12:11:44.980058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.992088] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:47.469 [2024-11-18 12:11:44.992197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.992206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:47.469 [2024-11-18 12:11:44.992216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.123 ms 00:28:47.469 [2024-11-18 12:11:44.992224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.992934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.992952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:47.469 [2024-11-18 12:11:44.992965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:28:47.469 [2024-11-18 12:11:44.992972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.995191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.995209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:47.469 [2024-11-18 12:11:44.995219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.203 ms 00:28:47.469 [2024-11-18 12:11:44.995227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.995263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.995271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:47.469 [2024-11-18 12:11:44.995280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:47.469 [2024-11-18 12:11:44.995290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.995397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.995407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:47.469 
[2024-11-18 12:11:44.995415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:47.469 [2024-11-18 12:11:44.995422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.995463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.995471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:47.469 [2024-11-18 12:11:44.995479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:47.469 [2024-11-18 12:11:44.995486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.995521] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:47.469 [2024-11-18 12:11:44.995531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.995538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:47.469 [2024-11-18 12:11:44.995545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:47.469 [2024-11-18 12:11:44.995553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.995617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.469 [2024-11-18 12:11:44.995626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:47.469 [2024-11-18 12:11:44.995634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:47.469 [2024-11-18 12:11:44.995644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.469 [2024-11-18 12:11:44.996670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1782.396 ms, result 0 00:28:47.469 [2024-11-18 12:11:45.009366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.469 [2024-11-18 12:11:45.025350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:47.470 [2024-11-18 12:11:45.034125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:47.470 Validate MD5 checksum, iteration 1 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.470 12:11:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:47.470 12:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:47.470 [2024-11-18 12:11:45.152844] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization... 00:28:47.470 [2024-11-18 12:11:45.152982] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81161 ] 00:28:47.731 [2024-11-18 12:11:45.310756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.731 [2024-11-18 12:11:45.404440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.640  [2024-11-18T12:11:47.908Z] Copying: 570/1024 [MB] (570 MBps) [2024-11-18T12:11:48.843Z] Copying: 1024/1024 [MB] (average 598 MBps) 00:28:51.142 00:28:51.142 12:11:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:51.142 12:11:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.135 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:53.136 Validate MD5 checksum, iteration 2 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=656ab26b9a139c4c9039ccfc12f4ba2e 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 656ab26b9a139c4c9039ccfc12f4ba2e != \6\5\6\a\b\2\6\b\9\a\1\3\9\c\4\c\9\0\3\9\c\c\f\c\1\2\f\4\b\a\2\e ]] 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:53.136 12:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:53.136 [2024-11-18 12:11:50.738297] Starting SPDK v25.01-pre git sha1 
403bf887a / DPDK 24.03.0 initialization... 00:28:53.136 [2024-11-18 12:11:50.738391] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81225 ] 00:28:53.417 [2024-11-18 12:11:50.888724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.417 [2024-11-18 12:11:50.968652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.799  [2024-11-18T12:11:53.068Z] Copying: 648/1024 [MB] (648 MBps) [2024-11-18T12:11:57.278Z] Copying: 1024/1024 [MB] (average 649 MBps) 00:28:59.577 00:28:59.577 12:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:59.577 12:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d3d31b2c3097da7eb8428af350ed0719 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d3d31b2c3097da7eb8428af350ed0719 != \d\3\d\3\1\b\2\c\3\0\9\7\d\a\7\e\b\8\4\2\8\a\f\3\5\0\e\d\0\7\1\9 ]] 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81121 ]] 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81121 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81121 ']' 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81121 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81121 00:29:02.128 killing process with pid 81121 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81121' 00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81121
00:29:02.128 12:11:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81121
00:29:02.390 [2024-11-18 12:11:59.945823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:29:02.390 [2024-11-18 12:11:59.958934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.958971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:29:02.390 [2024-11-18 12:11:59.958982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:29:02.390 [2024-11-18 12:11:59.958989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.959009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:29:02.390 [2024-11-18 12:11:59.961243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.961269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:29:02.390 [2024-11-18 12:11:59.961282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.223 ms
00:29:02.390 [2024-11-18 12:11:59.961288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.961483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.961492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:29:02.390 [2024-11-18 12:11:59.961499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms
00:29:02.390 [2024-11-18 12:11:59.961505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.963007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.963031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:29:02.390 [2024-11-18 12:11:59.963039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms
00:29:02.390 [2024-11-18 12:11:59.963045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.963936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.963950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:29:02.390 [2024-11-18 12:11:59.963958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.859 ms
00:29:02.390 [2024-11-18 12:11:59.963964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.972619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.972645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:29:02.390 [2024-11-18 12:11:59.972653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.629 ms
00:29:02.390 [2024-11-18 12:11:59.972663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.977132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.977159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:29:02.390 [2024-11-18 12:11:59.977167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.439 ms
00:29:02.390 [2024-11-18 12:11:59.977174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.977251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.977260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:29:02.390 [2024-11-18 12:11:59.977266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms
00:29:02.390 [2024-11-18 12:11:59.977273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.985104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.390 [2024-11-18 12:11:59.985131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:29:02.390 [2024-11-18 12:11:59.985138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.815 ms
00:29:02.390 [2024-11-18 12:11:59.985144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.390 [2024-11-18 12:11:59.992937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.391 [2024-11-18 12:11:59.992962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:29:02.391 [2024-11-18 12:11:59.992970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.767 ms
00:29:02.391 [2024-11-18 12:11:59.992975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.000796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.391 [2024-11-18 12:12:00.000822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:29:02.391 [2024-11-18 12:12:00.000829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.795 ms
00:29:02.391 [2024-11-18 12:12:00.000835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.008608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.391 [2024-11-18 12:12:00.008634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:29:02.391 [2024-11-18 12:12:00.008641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.725 ms
00:29:02.391 [2024-11-18 12:12:00.008647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.008673] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:29:02.391 [2024-11-18 12:12:00.008685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:29:02.391 [2024-11-18 12:12:00.008694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:29:02.391 [2024-11-18 12:12:00.008701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:29:02.391 [2024-11-18 12:12:00.008707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:29:02.391 [2024-11-18 12:12:00.008804] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:29:02.391 [2024-11-18 12:12:00.008810] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: e7300cbd-bb91-4fde-a772-b3a343c981cd
00:29:02.391 [2024-11-18 12:12:00.008817] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:29:02.391 [2024-11-18 12:12:00.008824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:29:02.391 [2024-11-18 12:12:00.008830] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:29:02.391 [2024-11-18 12:12:00.008841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:29:02.391 [2024-11-18 12:12:00.008847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:29:02.391 [2024-11-18 12:12:00.008854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:29:02.391 [2024-11-18 12:12:00.008860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:29:02.391 [2024-11-18 12:12:00.008866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:29:02.391 [2024-11-18 12:12:00.008872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:29:02.391 [2024-11-18 12:12:00.008879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.391 [2024-11-18 12:12:00.008887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:29:02.391 [2024-11-18 12:12:00.008895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms
00:29:02.391 [2024-11-18 12:12:00.008902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.019777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.391 [2024-11-18 12:12:00.019805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:29:02.391 [2024-11-18 12:12:00.019816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.848 ms
00:29:02.391 [2024-11-18 12:12:00.019823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.020135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:02.391 [2024-11-18 12:12:00.020144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:29:02.391 [2024-11-18 12:12:00.020151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms
00:29:02.391 [2024-11-18 12:12:00.020157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.056258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.391 [2024-11-18 12:12:00.056293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:29:02.391 [2024-11-18 12:12:00.056304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.391 [2024-11-18 12:12:00.056311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.057322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.391 [2024-11-18 12:12:00.057345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:29:02.391 [2024-11-18 12:12:00.057353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.391 [2024-11-18 12:12:00.057361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.057448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.391 [2024-11-18 12:12:00.057457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:29:02.391 [2024-11-18 12:12:00.057465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.391 [2024-11-18 12:12:00.057472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.391 [2024-11-18 12:12:00.057487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.391 [2024-11-18 12:12:00.057499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:29:02.391 [2024-11-18 12:12:00.057506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.391 [2024-11-18 12:12:00.057513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.120278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.120314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:29:02.653 [2024-11-18 12:12:00.120325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.120333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:29:02.653 [2024-11-18 12:12:00.171232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:29:02.653 [2024-11-18 12:12:00.171324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:29:02.653 [2024-11-18 12:12:00.171400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:29:02.653 [2024-11-18 12:12:00.171515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:29:02.653 [2024-11-18 12:12:00.171560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:29:02.653 [2024-11-18 12:12:00.171629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:02.653 [2024-11-18 12:12:00.171685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:29:02.653 [2024-11-18 12:12:00.171694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:02.653 [2024-11-18 12:12:00.171700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:02.653 [2024-11-18 12:12:00.171806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 212.843 ms, result 0
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:03.226 Remove shared memory files
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80924
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:29:03.226
00:29:03.226 real 1m24.933s
00:29:03.226 user 1m55.798s
00:29:03.226 sys 0m19.675s
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:03.226 12:12:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:03.226 ************************************
00:29:03.226 END TEST ftl_upgrade_shutdown
00:29:03.226 ************************************
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@14 -- # killprocess 72145
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@952 -- # '[' -z 72145 ']'
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@956 -- # kill -0 72145
00:29:03.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72145) - No such process
00:29:03.226 Process with pid 72145 is not found
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72145 is not found'
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81366
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81366
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@833 -- # '[' -z 81366 ']'
00:29:03.226 12:12:00 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:03.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:03.226 12:12:00 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:03.488 [2024-11-18 12:12:00.991726] Starting SPDK v25.01-pre git sha1 403bf887a / DPDK 24.03.0 initialization...
00:29:03.488 [2024-11-18 12:12:00.991846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81366 ]
00:29:03.488 [2024-11-18 12:12:01.146963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:03.750 [2024-11-18 12:12:01.257872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:04.321 12:12:01 ftl -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:04.321 12:12:01 ftl -- common/autotest_common.sh@866 -- # return 0
00:29:04.321 12:12:01 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:04.581 nvme0n1
00:29:04.581 12:12:02 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:29:04.581 12:12:02 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:04.581 12:12:02 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:04.843 12:12:02 ftl -- ftl/common.sh@28 -- # stores=bc730cdf-58e9-479c-a392-d5fc697e814d
00:29:04.843 12:12:02 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:29:04.843 12:12:02 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc730cdf-58e9-479c-a392-d5fc697e814d
00:29:04.843 12:12:02 ftl -- ftl/ftl.sh@23 -- # killprocess 81366
00:29:04.843 12:12:02 ftl -- common/autotest_common.sh@952 -- # '[' -z 81366 ']'
00:29:04.843 12:12:02 ftl -- common/autotest_common.sh@956 -- # kill -0 81366
00:29:04.843 12:12:02 ftl -- common/autotest_common.sh@957 -- # uname
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81366
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:05.104 killing process with pid 81366
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81366'
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@971 -- # kill 81366
00:29:05.104 12:12:02 ftl -- common/autotest_common.sh@976 -- # wait 81366
00:29:06.492 12:12:03 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:06.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:06.492 Waiting for block devices as requested
00:29:06.492 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:29:06.492 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:29:06.752 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:29:06.752 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:29:12.045 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:29:12.045 Remove shared memory files
00:29:12.045 12:12:09 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:29:12.045 12:12:09 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:12.045 12:12:09 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:29:12.045 12:12:09 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:29:12.045 12:12:09 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:29:12.045 12:12:09 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:12.045 12:12:09 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:29:12.045
00:29:12.045 real 13m41.351s
00:29:12.045 user 15m58.711s
00:29:12.045 sys 1m21.613s
00:29:12.045 12:12:09 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:12.045 ************************************
00:29:12.045 END TEST ftl
00:29:12.045 ************************************
00:29:12.045 12:12:09 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:12.045 12:12:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:29:12.045 12:12:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:29:12.045 12:12:09 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:29:12.045 12:12:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:29:12.045 12:12:09 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:29:12.045 12:12:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:29:12.045 12:12:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:29:12.045 12:12:09 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:29:12.045 12:12:09 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:29:12.045 12:12:09 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:29:12.045 12:12:09 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:12.045 12:12:09 -- common/autotest_common.sh@10 -- # set +x
00:29:12.045 12:12:09 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:29:12.045 12:12:09 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:29:12.045 12:12:09 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:29:12.045 12:12:09 -- common/autotest_common.sh@10 -- # set +x
00:29:13.434 INFO: APP EXITING
00:29:13.434 INFO: killing all VMs
00:29:13.434 INFO: killing vhost app
00:29:13.434 INFO: EXIT DONE
00:29:13.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:14.268 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:29:14.268 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:29:14.268 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:29:14.268 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:29:14.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:14.792 Cleaning
00:29:14.792 Removing: /var/run/dpdk/spdk0/config
00:29:14.792 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:29:14.792 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:29:15.055 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:29:15.055 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:29:15.055 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:29:15.055 Removing: /var/run/dpdk/spdk0/hugepage_info
00:29:15.055 Removing: /var/run/dpdk/spdk0
00:29:15.055 Removing: /var/run/dpdk/spdk_pid56931
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57133
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57340
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57433
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57473
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57590
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57608
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57801
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57894
00:29:15.055 Removing: /var/run/dpdk/spdk_pid57987
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58096
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58193
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58227
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58269
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58334
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58412
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58843
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58896
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58948
00:29:15.055 Removing: /var/run/dpdk/spdk_pid58964
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59055
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59071
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59173
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59184
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59237
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59255
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59303
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59320
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59480
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59511
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59600
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59767
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59845
00:29:15.055 Removing: /var/run/dpdk/spdk_pid59882
00:29:15.055 Removing: /var/run/dpdk/spdk_pid60308
00:29:15.055 Removing: /var/run/dpdk/spdk_pid60402
00:29:15.055 Removing: /var/run/dpdk/spdk_pid60513
00:29:15.055 Removing: /var/run/dpdk/spdk_pid60566
00:29:15.055 Removing: /var/run/dpdk/spdk_pid60586
00:29:15.055 Removing: /var/run/dpdk/spdk_pid60670
00:29:15.055 Removing: /var/run/dpdk/spdk_pid61287
00:29:15.055 Removing: /var/run/dpdk/spdk_pid61324
00:29:15.055 Removing: /var/run/dpdk/spdk_pid61800
00:29:15.055 Removing: /var/run/dpdk/spdk_pid61899
00:29:15.055 Removing: /var/run/dpdk/spdk_pid62009
00:29:15.055 Removing: /var/run/dpdk/spdk_pid62062
00:29:15.055 Removing: /var/run/dpdk/spdk_pid62082
00:29:15.055 Removing: /var/run/dpdk/spdk_pid62113
00:29:15.055 Removing: /var/run/dpdk/spdk_pid63951
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64084
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64088
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64100
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64145
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64149
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64161
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64206
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64210
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64222
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64267
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64271
00:29:15.055 Removing: /var/run/dpdk/spdk_pid64283
00:29:15.055 Removing: /var/run/dpdk/spdk_pid65654
00:29:15.055 Removing: /var/run/dpdk/spdk_pid65747
00:29:15.055 Removing: /var/run/dpdk/spdk_pid67155
00:29:15.055 Removing: /var/run/dpdk/spdk_pid68537
00:29:15.055 Removing: /var/run/dpdk/spdk_pid68628
00:29:15.055 Removing: /var/run/dpdk/spdk_pid68704
00:29:15.055 Removing: /var/run/dpdk/spdk_pid68775
00:29:15.055 Removing: /var/run/dpdk/spdk_pid68874
00:29:15.055 Removing: /var/run/dpdk/spdk_pid68948
00:29:15.055 Removing: /var/run/dpdk/spdk_pid69097
00:29:15.055 Removing: /var/run/dpdk/spdk_pid69445
00:29:15.055 Removing: /var/run/dpdk/spdk_pid69482
00:29:15.055 Removing: /var/run/dpdk/spdk_pid69916
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70101
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70200
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70310
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70353
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70384
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70676
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70737
00:29:15.055 Removing: /var/run/dpdk/spdk_pid70806
00:29:15.055 Removing: /var/run/dpdk/spdk_pid71189
00:29:15.055 Removing: /var/run/dpdk/spdk_pid71335
00:29:15.055 Removing: /var/run/dpdk/spdk_pid72145
00:29:15.055 Removing: /var/run/dpdk/spdk_pid72272
00:29:15.055 Removing: /var/run/dpdk/spdk_pid72441
00:29:15.055 Removing: /var/run/dpdk/spdk_pid72538
00:29:15.055 Removing: /var/run/dpdk/spdk_pid72835
00:29:15.055 Removing: /var/run/dpdk/spdk_pid73124
00:29:15.055 Removing: /var/run/dpdk/spdk_pid73480
00:29:15.055 Removing: /var/run/dpdk/spdk_pid73663
00:29:15.055 Removing: /var/run/dpdk/spdk_pid73817
00:29:15.055 Removing: /var/run/dpdk/spdk_pid73862
00:29:15.055 Removing: /var/run/dpdk/spdk_pid74077
00:29:15.316 Removing: /var/run/dpdk/spdk_pid74107
00:29:15.316 Removing: /var/run/dpdk/spdk_pid74154
00:29:15.316 Removing: /var/run/dpdk/spdk_pid74435
00:29:15.316 Removing: /var/run/dpdk/spdk_pid74671
00:29:15.316 Removing: /var/run/dpdk/spdk_pid75304
00:29:15.316 Removing: /var/run/dpdk/spdk_pid75988
00:29:15.316 Removing: /var/run/dpdk/spdk_pid76720
00:29:15.316 Removing: /var/run/dpdk/spdk_pid77590
00:29:15.316 Removing: /var/run/dpdk/spdk_pid77732
00:29:15.316 Removing: /var/run/dpdk/spdk_pid77821
00:29:15.316 Removing: /var/run/dpdk/spdk_pid78438
00:29:15.316 Removing: /var/run/dpdk/spdk_pid78494
00:29:15.316 Removing: /var/run/dpdk/spdk_pid79080
00:29:15.316 Removing: /var/run/dpdk/spdk_pid79549
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80367
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80500
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80542
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80609
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80659
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80723
00:29:15.316 Removing: /var/run/dpdk/spdk_pid80924
00:29:15.316 Removing: /var/run/dpdk/spdk_pid81004
00:29:15.316 Removing: /var/run/dpdk/spdk_pid81065
00:29:15.316 Removing: /var/run/dpdk/spdk_pid81121
00:29:15.316 Removing: /var/run/dpdk/spdk_pid81161
00:29:15.316 Removing: /var/run/dpdk/spdk_pid81225
00:29:15.316 Removing: /var/run/dpdk/spdk_pid81366
00:29:15.316 Clean
00:29:15.316 12:12:12 -- common/autotest_common.sh@1451 -- # return 0
00:29:15.316 12:12:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:29:15.316 12:12:12 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:15.316 12:12:12 -- common/autotest_common.sh@10 -- # set +x
00:29:15.316 12:12:12 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:29:15.316 12:12:12 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:15.316 12:12:12 -- common/autotest_common.sh@10 -- # set +x
00:29:15.316 12:12:12 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:15.316 12:12:12 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:29:15.316 12:12:12 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:29:15.316 12:12:12 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:29:15.316 12:12:12 -- spdk/autotest.sh@394 -- # hostname
00:29:15.316 12:12:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:29:15.578 geninfo: WARNING: invalid characters removed from testname!
00:29:42.172 12:12:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:44.720 12:12:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:47.270 12:12:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:49.840 12:12:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:51.753 12:12:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:54.299 12:12:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:56.848 12:12:54 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:56.848 12:12:54 -- spdk/autorun.sh@1 -- $ timing_finish
00:29:56.848 12:12:54 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:29:56.848 12:12:54 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:56.848 12:12:54 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:56.848 12:12:54 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:56.848 + [[ -n 5023 ]]
00:29:56.848 + sudo kill 5023
00:29:56.859 [Pipeline] }
00:29:56.875 [Pipeline] // timeout
00:29:56.880 [Pipeline] }
00:29:56.894 [Pipeline] // stage
00:29:56.900 [Pipeline] }
00:29:56.914 [Pipeline] // catchError
00:29:56.923 [Pipeline] stage
00:29:56.925 [Pipeline] { (Stop VM)
00:29:56.938 [Pipeline] sh
00:29:57.223 + vagrant halt
00:30:00.531 ==> default: Halting domain...
00:30:07.132 [Pipeline] sh
00:30:07.418 + vagrant destroy -f
00:30:09.967 ==> default: Removing domain...
00:30:10.551 [Pipeline] sh
00:30:10.894 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:30:10.904 [Pipeline] }
00:30:10.919 [Pipeline] // stage
00:30:10.925 [Pipeline] }
00:30:10.939 [Pipeline] // dir
00:30:10.944 [Pipeline] }
00:30:10.958 [Pipeline] // wrap
00:30:10.965 [Pipeline] }
00:30:10.977 [Pipeline] // catchError
00:30:10.985 [Pipeline] stage
00:30:10.988 [Pipeline] { (Epilogue)
00:30:11.000 [Pipeline] sh
00:30:11.288 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:16.580 [Pipeline] catchError
00:30:16.582 [Pipeline] {
00:30:16.597 [Pipeline] sh
00:30:16.888 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:16.888 Artifacts sizes are good
00:30:16.899 [Pipeline] }
00:30:16.915 [Pipeline] // catchError
00:30:16.927 [Pipeline] archiveArtifacts
00:30:16.936 Archiving artifacts
00:30:17.032 [Pipeline] cleanWs
00:30:17.045 [WS-CLEANUP] Deleting project workspace...
00:30:17.045 [WS-CLEANUP] Deferred wipeout is used...
00:30:17.053 [WS-CLEANUP] done
00:30:17.055 [Pipeline] }
00:30:17.074 [Pipeline] // stage
00:30:17.080 [Pipeline] }
00:30:17.097 [Pipeline] // node
00:30:17.103 [Pipeline] End of Pipeline
00:30:17.144 Finished: SUCCESS